00:00:00.001 Started by upstream project "autotest-per-patch" build number 122875 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.032 The recommended git tool is: git 00:00:00.032 using credential 00000000-0000-0000-0000-000000000002 00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.055 Fetching changes from the remote Git repository 00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.097 Using shallow fetch with depth 1 00:00:00.098 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.098 > git --version # timeout=10 00:00:00.154 > git --version # 'git version 2.39.2' 00:00:00.154 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.155 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.155 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.436 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.447 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.459 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:03.459 > git config core.sparsecheckout # timeout=10 00:00:03.470 > git read-tree -mu HEAD # timeout=10 00:00:03.485 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:03.503 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:03.503 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:03.579 [Pipeline] Start of Pipeline 00:00:03.589 [Pipeline] library 00:00:03.590 Loading library shm_lib@master 00:00:03.590 Library shm_lib@master is cached. Copying from home. 00:00:03.603 [Pipeline] node 00:00:03.609 Running on FCP03 in /var/jenkins/workspace/dsa-phy-autotest 00:00:03.613 [Pipeline] { 00:00:03.623 [Pipeline] catchError 00:00:03.624 [Pipeline] { 00:00:03.637 [Pipeline] wrap 00:00:03.648 [Pipeline] { 00:00:03.657 [Pipeline] stage 00:00:03.659 [Pipeline] { (Prologue) 00:00:03.843 [Pipeline] sh 00:00:04.129 + logger -p user.info -t JENKINS-CI 00:00:04.147 [Pipeline] echo 00:00:04.148 Node: FCP03 00:00:04.155 [Pipeline] sh 00:00:04.454 [Pipeline] setCustomBuildProperty 00:00:04.465 [Pipeline] echo 00:00:04.466 Cleanup processes 00:00:04.471 [Pipeline] sh 00:00:04.752 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.752 2351947 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.762 [Pipeline] sh 00:00:05.042 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:05.042 ++ grep -v 'sudo pgrep' 00:00:05.042 ++ awk '{print $1}' 00:00:05.042 + sudo kill -9 00:00:05.042 + true 00:00:05.056 [Pipeline] cleanWs 00:00:05.066 [WS-CLEANUP] Deleting project workspace... 00:00:05.066 [WS-CLEANUP] Deferred wipeout is used... 
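[Editor's note] The prologue above hunts down and force-kills any SPDK processes left over from a previous run before the workspace wipe completes. A minimal standalone sketch of that cleanup step, assuming the same workspace path printed in the trace (the WORKSPACE variable and the trailing '|| true' arrangement are illustrative, not the literal pipeline script):

    #!/usr/bin/env bash
    # Sketch of the stale-process cleanup shown in the prologue.
    # WORKSPACE is a placeholder matching the path printed above.
    WORKSPACE=/var/jenkins/workspace/dsa-phy-autotest
    # pgrep lists anything still running from the old checkout; the match also
    # catches this pgrep itself, so it is filtered back out before killing.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Force-kill the leftovers; '|| true' keeps the stage from failing when the list is empty.
    sudo kill -9 $pids || true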
00:00:05.073 [WS-CLEANUP] done 00:00:05.077 [Pipeline] setCustomBuildProperty 00:00:05.090 [Pipeline] sh 00:00:05.375 + sudo git config --global --replace-all safe.directory '*' 00:00:05.444 [Pipeline] nodesByLabel 00:00:05.446 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.453 [Pipeline] httpRequest 00:00:05.458 HttpMethod: GET 00:00:05.458 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.462 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.470 Response Code: HTTP/1.1 200 OK 00:00:05.471 Success: Status code 200 is in the accepted range: 200,404 00:00:05.471 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.420 [Pipeline] sh 00:00:06.705 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.723 [Pipeline] httpRequest 00:00:06.728 HttpMethod: GET 00:00:06.729 URL: http://10.211.164.101/packages/spdk_0e4f7fc9ba88308820bc4a1b6d388d42c1f4c5b0.tar.gz 00:00:06.729 Sending request to url: http://10.211.164.101/packages/spdk_0e4f7fc9ba88308820bc4a1b6d388d42c1f4c5b0.tar.gz 00:00:06.739 Response Code: HTTP/1.1 200 OK 00:00:06.740 Success: Status code 200 is in the accepted range: 200,404 00:00:06.740 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_0e4f7fc9ba88308820bc4a1b6d388d42c1f4c5b0.tar.gz 00:00:34.569 [Pipeline] sh 00:00:34.860 + tar --no-same-owner -xf spdk_0e4f7fc9ba88308820bc4a1b6d388d42c1f4c5b0.tar.gz 00:00:37.424 [Pipeline] sh 00:00:37.706 + git -C spdk log --oneline -n5 00:00:37.706 0e4f7fc9b blob: add blob set parent 00:00:37.706 4506c0c36 test/common: Enable inherit_errexit 00:00:37.706 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:37.706 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:37.706 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:00:37.720 [Pipeline] } 00:00:37.739 [Pipeline] // stage 00:00:37.749 [Pipeline] stage 00:00:37.751 [Pipeline] { (Prepare) 00:00:37.769 [Pipeline] writeFile 00:00:37.788 [Pipeline] sh 00:00:38.075 + logger -p user.info -t JENKINS-CI 00:00:38.089 [Pipeline] sh 00:00:38.377 + logger -p user.info -t JENKINS-CI 00:00:38.393 [Pipeline] sh 00:00:38.683 + cat autorun-spdk.conf 00:00:38.683 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.683 SPDK_TEST_ACCEL_DSA=1 00:00:38.683 SPDK_TEST_ACCEL_IAA=1 00:00:38.683 SPDK_TEST_NVMF=1 00:00:38.683 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.683 SPDK_RUN_ASAN=1 00:00:38.683 SPDK_RUN_UBSAN=1 00:00:38.690 RUN_NIGHTLY=0 00:00:38.695 [Pipeline] readFile 00:00:38.724 [Pipeline] withEnv 00:00:38.726 [Pipeline] { 00:00:38.742 [Pipeline] sh 00:00:39.028 + set -ex 00:00:39.028 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:00:39.028 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:00:39.028 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.028 ++ SPDK_TEST_ACCEL_DSA=1 00:00:39.028 ++ SPDK_TEST_ACCEL_IAA=1 00:00:39.028 ++ SPDK_TEST_NVMF=1 00:00:39.028 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.028 ++ SPDK_RUN_ASAN=1 00:00:39.028 ++ SPDK_RUN_UBSAN=1 00:00:39.028 ++ RUN_NIGHTLY=0 00:00:39.028 + case $SPDK_TEST_NVMF_NICS in 00:00:39.028 + DRIVERS= 00:00:39.028 + [[ -n '' ]] 00:00:39.028 + exit 0 00:00:39.038 [Pipeline] } 00:00:39.055 [Pipeline] // withEnv 00:00:39.061 [Pipeline] } 00:00:39.080 [Pipeline] // stage 00:00:39.089 [Pipeline] catchError 00:00:39.091 [Pipeline] { 00:00:39.105 
[Pipeline] timeout 00:00:39.106 Timeout set to expire in 50 min 00:00:39.107 [Pipeline] { 00:00:39.117 [Pipeline] stage 00:00:39.118 [Pipeline] { (Tests) 00:00:39.130 [Pipeline] sh 00:00:39.418 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest 00:00:39.418 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest 00:00:39.418 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest 00:00:39.418 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]] 00:00:39.418 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:39.418 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output 00:00:39.418 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]] 00:00:39.418 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:39.418 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output 00:00:39.418 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:39.418 + cd /var/jenkins/workspace/dsa-phy-autotest 00:00:39.418 + source /etc/os-release 00:00:39.418 ++ NAME='Fedora Linux' 00:00:39.418 ++ VERSION='38 (Cloud Edition)' 00:00:39.418 ++ ID=fedora 00:00:39.418 ++ VERSION_ID=38 00:00:39.418 ++ VERSION_CODENAME= 00:00:39.418 ++ PLATFORM_ID=platform:f38 00:00:39.418 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:39.418 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.418 ++ LOGO=fedora-logo-icon 00:00:39.418 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:39.418 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.418 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:39.418 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.418 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.418 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.418 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:39.418 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.418 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:39.418 ++ SUPPORT_END=2024-05-14 00:00:39.418 ++ VARIANT='Cloud Edition' 00:00:39.418 ++ VARIANT_ID=cloud 00:00:39.418 + uname -a 00:00:39.418 Linux spdk-fcp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:39.418 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:00:42.082 Hugepages 00:00:42.082 node hugesize free / total 00:00:42.082 node0 1048576kB 0 / 0 00:00:42.082 node0 2048kB 0 / 0 00:00:42.082 node1 1048576kB 0 / 0 00:00:42.082 node1 2048kB 0 / 0 00:00:42.082 00:00:42.082 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.082 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:00:42.082 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:00:42.082 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:00:42.082 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:00:42.082 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:00:42.082 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:00:42.082 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:00:42.082 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:00:42.082 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:00:42.082 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:00:42.082 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:00:42.082 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:00:42.082 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:00:42.082 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:00:42.082 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:00:42.082 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:00:42.082 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:00:42.082 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:00:42.082 + rm -f /tmp/spdk-ld-path 00:00:42.082 + source autorun-spdk.conf 00:00:42.082 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.082 ++ SPDK_TEST_ACCEL_DSA=1 00:00:42.082 ++ SPDK_TEST_ACCEL_IAA=1 00:00:42.082 ++ SPDK_TEST_NVMF=1 00:00:42.082 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.082 ++ SPDK_RUN_ASAN=1 00:00:42.082 ++ SPDK_RUN_UBSAN=1 00:00:42.082 ++ RUN_NIGHTLY=0 00:00:42.082 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.082 + [[ -n '' ]] 00:00:42.082 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:42.082 + for M in /var/spdk/build-*-manifest.txt 00:00:42.082 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.082 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:00:42.082 + for M in /var/spdk/build-*-manifest.txt 00:00:42.082 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.082 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:00:42.082 ++ uname 00:00:42.082 + [[ Linux == \L\i\n\u\x ]] 00:00:42.082 + sudo dmesg -T 00:00:42.344 + sudo dmesg --clear 00:00:42.344 + dmesg_pid=2352981 00:00:42.344 + [[ Fedora Linux == FreeBSD ]] 00:00:42.344 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.344 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.344 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.344 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.344 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.344 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.344 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.344 + sudo dmesg -Tw 00:00:42.344 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:42.344 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.344 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.344 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.344 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.344 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.344 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.344 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:00:42.344 Test configuration: 00:00:42.344 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.344 SPDK_TEST_ACCEL_DSA=1 00:00:42.344 SPDK_TEST_ACCEL_IAA=1 00:00:42.344 SPDK_TEST_NVMF=1 00:00:42.344 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.344 SPDK_RUN_ASAN=1 00:00:42.344 SPDK_RUN_UBSAN=1 00:00:42.344 RUN_NIGHTLY=0 10:19:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:00:42.344 10:19:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.344 10:19:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.344 10:19:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.344 10:19:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.344 10:19:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.344 10:19:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.344 10:19:58 -- paths/export.sh@5 -- $ export PATH 00:00:42.344 10:19:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.344 10:19:58 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:00:42.344 10:19:58 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:42.344 10:19:58 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715761198.XXXXXX 00:00:42.344 10:19:58 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715761198.O82HWm 00:00:42.344 10:19:58 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:42.344 10:19:58 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:42.344 10:19:58 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:00:42.344 10:19:58 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.344 10:19:58 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.344 10:19:58 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:42.344 10:19:58 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:42.344 10:19:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.344 10:19:58 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:42.344 10:19:58 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:42.344 10:19:58 -- pm/common@17 -- $ local monitor 00:00:42.344 10:19:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.344 10:19:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.344 10:19:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.344 10:19:58 -- pm/common@21 -- $ date +%s 00:00:42.344 10:19:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.344 10:19:58 -- pm/common@25 -- $ sleep 1 00:00:42.344 10:19:58 -- pm/common@21 -- $ 
date +%s 00:00:42.344 10:19:58 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715761198 00:00:42.344 10:19:58 -- pm/common@21 -- $ date +%s 00:00:42.344 10:19:58 -- pm/common@21 -- $ date +%s 00:00:42.344 10:19:58 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715761198 00:00:42.344 10:19:58 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715761198 00:00:42.344 10:19:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715761198 00:00:42.344 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715761198_collect-cpu-load.pm.log 00:00:42.344 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715761198_collect-vmstat.pm.log 00:00:42.344 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715761198_collect-cpu-temp.pm.log 00:00:42.344 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715761198_collect-bmc-pm.bmc.pm.log 00:00:43.285 10:19:59 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:43.285 10:19:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.285 10:19:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.285 10:19:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:43.285 10:19:59 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.285 Wed May 15 08:19:59 AM UTC 2024 00:00:43.285 10:19:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.285 v24.05-pre-659-g0e4f7fc9b 00:00:43.285 10:19:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:43.285 10:19:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:43.285 10:19:59 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:43.285 10:19:59 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:43.285 10:19:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.285 ************************************ 00:00:43.285 START TEST asan 00:00:43.285 ************************************ 00:00:43.285 10:19:59 asan -- common/autotest_common.sh@1122 -- $ echo 'using asan' 00:00:43.285 using asan 00:00:43.285 00:00:43.285 real 0m0.000s 00:00:43.285 user 0m0.000s 00:00:43.285 sys 0m0.000s 00:00:43.285 10:19:59 asan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:43.285 10:19:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.285 ************************************ 00:00:43.285 END TEST asan 00:00:43.285 ************************************ 00:00:43.546 10:19:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.546 10:19:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.546 10:19:59 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:43.546 10:19:59 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:43.546 10:19:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.546 ************************************ 00:00:43.546 START TEST 
ubsan 00:00:43.546 ************************************ 00:00:43.546 10:19:59 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:43.546 using ubsan 00:00:43.546 00:00:43.546 real 0m0.000s 00:00:43.546 user 0m0.000s 00:00:43.546 sys 0m0.000s 00:00:43.546 10:19:59 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:43.546 10:19:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.546 ************************************ 00:00:43.546 END TEST ubsan 00:00:43.546 ************************************ 00:00:43.546 10:19:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.546 10:19:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.546 10:19:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.546 10:19:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:43.546 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:00:43.546 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:00:43.807 Using 'verbs' RDMA provider 00:00:56.976 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.971 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.971 Creating mk/config.mk...done. 00:01:06.971 Creating mk/cc.flags.mk...done. 00:01:06.971 Type 'make' to build. 00:01:06.971 10:20:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:06.971 10:20:22 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:06.971 10:20:22 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:06.971 10:20:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.971 ************************************ 00:01:06.971 START TEST make 00:01:06.971 ************************************ 00:01:06.971 10:20:22 make -- common/autotest_common.sh@1122 -- $ make -j128 00:01:06.971 make[1]: Nothing to be done for 'all'. 
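[Editor's note] The build stage above configures SPDK with ASAN/UBSAN plus the idxd/DSA-related options and then starts make -j128. For reference, a hedged sketch of repeating that configuration from an existing SPDK checkout; the option list is copied from the configure invocation in the log, while the checkout path and the parallelism value are assumptions:

    # Sketch: re-run the configure/build step by hand.
    # The flags mirror the autobuild invocation above; the path and -j value
    # are placeholders rather than what the CI node uses.
    cd /path/to/spdk   # assumption: an existing clone with submodules checked out
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"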
00:01:12.242 The Meson build system 00:01:12.242 Version: 1.3.1 00:01:12.242 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:01:12.242 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:01:12.242 Build type: native build 00:01:12.242 Program cat found: YES (/usr/bin/cat) 00:01:12.242 Project name: DPDK 00:01:12.242 Project version: 23.11.0 00:01:12.242 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.242 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.242 Host machine cpu family: x86_64 00:01:12.242 Host machine cpu: x86_64 00:01:12.242 Message: ## Building in Developer Mode ## 00:01:12.242 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:12.242 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:12.242 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:12.242 Program python3 found: YES (/usr/bin/python3) 00:01:12.242 Program cat found: YES (/usr/bin/cat) 00:01:12.242 Compiler for C supports arguments -march=native: YES 00:01:12.242 Checking for size of "void *" : 8 00:01:12.242 Checking for size of "void *" : 8 (cached) 00:01:12.242 Library m found: YES 00:01:12.242 Library numa found: YES 00:01:12.242 Has header "numaif.h" : YES 00:01:12.242 Library fdt found: NO 00:01:12.242 Library execinfo found: NO 00:01:12.242 Has header "execinfo.h" : YES 00:01:12.242 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.242 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:12.242 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:12.242 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:12.242 Run-time dependency openssl found: YES 3.0.9 00:01:12.242 Run-time dependency libpcap found: YES 1.10.4 00:01:12.242 Has header "pcap.h" with dependency libpcap: YES 00:01:12.242 Compiler for C supports arguments -Wcast-qual: YES 00:01:12.242 Compiler for C supports arguments -Wdeprecated: YES 00:01:12.242 Compiler for C supports arguments -Wformat: YES 00:01:12.242 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:12.242 Compiler for C supports arguments -Wformat-security: NO 00:01:12.242 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.242 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:12.242 Compiler for C supports arguments -Wnested-externs: YES 00:01:12.242 Compiler for C supports arguments -Wold-style-definition: YES 00:01:12.242 Compiler for C supports arguments -Wpointer-arith: YES 00:01:12.242 Compiler for C supports arguments -Wsign-compare: YES 00:01:12.242 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:12.242 Compiler for C supports arguments -Wundef: YES 00:01:12.242 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.242 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:12.242 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:12.242 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.242 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:12.242 Program objdump found: YES (/usr/bin/objdump) 00:01:12.242 Compiler for C supports arguments -mavx512f: YES 00:01:12.242 Checking if "AVX512 checking" compiles: YES 00:01:12.242 Fetching value of define "__SSE4_2__" : 1 00:01:12.242 Fetching value of define "__AES__" : 1 
00:01:12.242 Fetching value of define "__AVX__" : 1 00:01:12.242 Fetching value of define "__AVX2__" : 1 00:01:12.242 Fetching value of define "__AVX512BW__" : 1 00:01:12.242 Fetching value of define "__AVX512CD__" : 1 00:01:12.242 Fetching value of define "__AVX512DQ__" : 1 00:01:12.242 Fetching value of define "__AVX512F__" : 1 00:01:12.242 Fetching value of define "__AVX512VL__" : 1 00:01:12.242 Fetching value of define "__PCLMUL__" : 1 00:01:12.242 Fetching value of define "__RDRND__" : 1 00:01:12.242 Fetching value of define "__RDSEED__" : 1 00:01:12.242 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:12.242 Fetching value of define "__znver1__" : (undefined) 00:01:12.242 Fetching value of define "__znver2__" : (undefined) 00:01:12.242 Fetching value of define "__znver3__" : (undefined) 00:01:12.242 Fetching value of define "__znver4__" : (undefined) 00:01:12.242 Library asan found: YES 00:01:12.242 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:12.242 Message: lib/log: Defining dependency "log" 00:01:12.242 Message: lib/kvargs: Defining dependency "kvargs" 00:01:12.242 Message: lib/telemetry: Defining dependency "telemetry" 00:01:12.242 Library rt found: YES 00:01:12.242 Checking for function "getentropy" : NO 00:01:12.242 Message: lib/eal: Defining dependency "eal" 00:01:12.242 Message: lib/ring: Defining dependency "ring" 00:01:12.242 Message: lib/rcu: Defining dependency "rcu" 00:01:12.242 Message: lib/mempool: Defining dependency "mempool" 00:01:12.242 Message: lib/mbuf: Defining dependency "mbuf" 00:01:12.242 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:12.242 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.242 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.242 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.242 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.242 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:12.242 Compiler for C supports arguments -mpclmul: YES 00:01:12.242 Compiler for C supports arguments -maes: YES 00:01:12.242 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:12.242 Compiler for C supports arguments -mavx512bw: YES 00:01:12.242 Compiler for C supports arguments -mavx512dq: YES 00:01:12.242 Compiler for C supports arguments -mavx512vl: YES 00:01:12.242 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:12.242 Compiler for C supports arguments -mavx2: YES 00:01:12.242 Compiler for C supports arguments -mavx: YES 00:01:12.242 Message: lib/net: Defining dependency "net" 00:01:12.242 Message: lib/meter: Defining dependency "meter" 00:01:12.242 Message: lib/ethdev: Defining dependency "ethdev" 00:01:12.242 Message: lib/pci: Defining dependency "pci" 00:01:12.242 Message: lib/cmdline: Defining dependency "cmdline" 00:01:12.242 Message: lib/hash: Defining dependency "hash" 00:01:12.242 Message: lib/timer: Defining dependency "timer" 00:01:12.242 Message: lib/compressdev: Defining dependency "compressdev" 00:01:12.242 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:12.242 Message: lib/dmadev: Defining dependency "dmadev" 00:01:12.242 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:12.242 Message: lib/power: Defining dependency "power" 00:01:12.242 Message: lib/reorder: Defining dependency "reorder" 00:01:12.242 Message: lib/security: Defining dependency "security" 00:01:12.242 Has header "linux/userfaultfd.h" : YES 00:01:12.242 Has header "linux/vduse.h" : YES 00:01:12.242 Message: lib/vhost: Defining dependency 
"vhost" 00:01:12.242 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:12.242 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:12.242 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:12.242 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:12.242 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:12.242 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:12.242 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:12.242 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:12.242 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:12.242 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:12.242 Program doxygen found: YES (/usr/bin/doxygen) 00:01:12.242 Configuring doxy-api-html.conf using configuration 00:01:12.242 Configuring doxy-api-man.conf using configuration 00:01:12.242 Program mandb found: YES (/usr/bin/mandb) 00:01:12.242 Program sphinx-build found: NO 00:01:12.242 Configuring rte_build_config.h using configuration 00:01:12.242 Message: 00:01:12.242 ================= 00:01:12.242 Applications Enabled 00:01:12.242 ================= 00:01:12.242 00:01:12.242 apps: 00:01:12.242 00:01:12.242 00:01:12.242 Message: 00:01:12.242 ================= 00:01:12.242 Libraries Enabled 00:01:12.242 ================= 00:01:12.242 00:01:12.242 libs: 00:01:12.242 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:12.242 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:12.242 cryptodev, dmadev, power, reorder, security, vhost, 00:01:12.242 00:01:12.242 Message: 00:01:12.242 =============== 00:01:12.242 Drivers Enabled 00:01:12.242 =============== 00:01:12.242 00:01:12.242 common: 00:01:12.242 00:01:12.242 bus: 00:01:12.242 pci, vdev, 00:01:12.242 mempool: 00:01:12.242 ring, 00:01:12.242 dma: 00:01:12.242 00:01:12.242 net: 00:01:12.242 00:01:12.242 crypto: 00:01:12.242 00:01:12.242 compress: 00:01:12.242 00:01:12.242 vdpa: 00:01:12.242 00:01:12.242 00:01:12.242 Message: 00:01:12.242 ================= 00:01:12.242 Content Skipped 00:01:12.243 ================= 00:01:12.243 00:01:12.243 apps: 00:01:12.243 dumpcap: explicitly disabled via build config 00:01:12.243 graph: explicitly disabled via build config 00:01:12.243 pdump: explicitly disabled via build config 00:01:12.243 proc-info: explicitly disabled via build config 00:01:12.243 test-acl: explicitly disabled via build config 00:01:12.243 test-bbdev: explicitly disabled via build config 00:01:12.243 test-cmdline: explicitly disabled via build config 00:01:12.243 test-compress-perf: explicitly disabled via build config 00:01:12.243 test-crypto-perf: explicitly disabled via build config 00:01:12.243 test-dma-perf: explicitly disabled via build config 00:01:12.243 test-eventdev: explicitly disabled via build config 00:01:12.243 test-fib: explicitly disabled via build config 00:01:12.243 test-flow-perf: explicitly disabled via build config 00:01:12.243 test-gpudev: explicitly disabled via build config 00:01:12.243 test-mldev: explicitly disabled via build config 00:01:12.243 test-pipeline: explicitly disabled via build config 00:01:12.243 test-pmd: explicitly disabled via build config 00:01:12.243 test-regex: explicitly disabled via build config 00:01:12.243 test-sad: explicitly disabled via build config 00:01:12.243 test-security-perf: explicitly disabled via build config 00:01:12.243 
00:01:12.243 libs: 00:01:12.243 metrics: explicitly disabled via build config 00:01:12.243 acl: explicitly disabled via build config 00:01:12.243 bbdev: explicitly disabled via build config 00:01:12.243 bitratestats: explicitly disabled via build config 00:01:12.243 bpf: explicitly disabled via build config 00:01:12.243 cfgfile: explicitly disabled via build config 00:01:12.243 distributor: explicitly disabled via build config 00:01:12.243 efd: explicitly disabled via build config 00:01:12.243 eventdev: explicitly disabled via build config 00:01:12.243 dispatcher: explicitly disabled via build config 00:01:12.243 gpudev: explicitly disabled via build config 00:01:12.243 gro: explicitly disabled via build config 00:01:12.243 gso: explicitly disabled via build config 00:01:12.243 ip_frag: explicitly disabled via build config 00:01:12.243 jobstats: explicitly disabled via build config 00:01:12.243 latencystats: explicitly disabled via build config 00:01:12.243 lpm: explicitly disabled via build config 00:01:12.243 member: explicitly disabled via build config 00:01:12.243 pcapng: explicitly disabled via build config 00:01:12.243 rawdev: explicitly disabled via build config 00:01:12.243 regexdev: explicitly disabled via build config 00:01:12.243 mldev: explicitly disabled via build config 00:01:12.243 rib: explicitly disabled via build config 00:01:12.243 sched: explicitly disabled via build config 00:01:12.243 stack: explicitly disabled via build config 00:01:12.243 ipsec: explicitly disabled via build config 00:01:12.243 pdcp: explicitly disabled via build config 00:01:12.243 fib: explicitly disabled via build config 00:01:12.243 port: explicitly disabled via build config 00:01:12.243 pdump: explicitly disabled via build config 00:01:12.243 table: explicitly disabled via build config 00:01:12.243 pipeline: explicitly disabled via build config 00:01:12.243 graph: explicitly disabled via build config 00:01:12.243 node: explicitly disabled via build config 00:01:12.243 00:01:12.243 drivers: 00:01:12.243 common/cpt: not in enabled drivers build config 00:01:12.243 common/dpaax: not in enabled drivers build config 00:01:12.243 common/iavf: not in enabled drivers build config 00:01:12.243 common/idpf: not in enabled drivers build config 00:01:12.243 common/mvep: not in enabled drivers build config 00:01:12.243 common/octeontx: not in enabled drivers build config 00:01:12.243 bus/auxiliary: not in enabled drivers build config 00:01:12.243 bus/cdx: not in enabled drivers build config 00:01:12.243 bus/dpaa: not in enabled drivers build config 00:01:12.243 bus/fslmc: not in enabled drivers build config 00:01:12.243 bus/ifpga: not in enabled drivers build config 00:01:12.243 bus/platform: not in enabled drivers build config 00:01:12.243 bus/vmbus: not in enabled drivers build config 00:01:12.243 common/cnxk: not in enabled drivers build config 00:01:12.243 common/mlx5: not in enabled drivers build config 00:01:12.243 common/nfp: not in enabled drivers build config 00:01:12.243 common/qat: not in enabled drivers build config 00:01:12.243 common/sfc_efx: not in enabled drivers build config 00:01:12.243 mempool/bucket: not in enabled drivers build config 00:01:12.243 mempool/cnxk: not in enabled drivers build config 00:01:12.243 mempool/dpaa: not in enabled drivers build config 00:01:12.243 mempool/dpaa2: not in enabled drivers build config 00:01:12.243 mempool/octeontx: not in enabled drivers build config 00:01:12.243 mempool/stack: not in enabled drivers build config 00:01:12.243 dma/cnxk: not in enabled 
drivers build config 00:01:12.243 dma/dpaa: not in enabled drivers build config 00:01:12.243 dma/dpaa2: not in enabled drivers build config 00:01:12.243 dma/hisilicon: not in enabled drivers build config 00:01:12.243 dma/idxd: not in enabled drivers build config 00:01:12.243 dma/ioat: not in enabled drivers build config 00:01:12.243 dma/skeleton: not in enabled drivers build config 00:01:12.243 net/af_packet: not in enabled drivers build config 00:01:12.243 net/af_xdp: not in enabled drivers build config 00:01:12.243 net/ark: not in enabled drivers build config 00:01:12.243 net/atlantic: not in enabled drivers build config 00:01:12.243 net/avp: not in enabled drivers build config 00:01:12.243 net/axgbe: not in enabled drivers build config 00:01:12.243 net/bnx2x: not in enabled drivers build config 00:01:12.243 net/bnxt: not in enabled drivers build config 00:01:12.243 net/bonding: not in enabled drivers build config 00:01:12.243 net/cnxk: not in enabled drivers build config 00:01:12.243 net/cpfl: not in enabled drivers build config 00:01:12.243 net/cxgbe: not in enabled drivers build config 00:01:12.243 net/dpaa: not in enabled drivers build config 00:01:12.243 net/dpaa2: not in enabled drivers build config 00:01:12.243 net/e1000: not in enabled drivers build config 00:01:12.243 net/ena: not in enabled drivers build config 00:01:12.243 net/enetc: not in enabled drivers build config 00:01:12.243 net/enetfec: not in enabled drivers build config 00:01:12.243 net/enic: not in enabled drivers build config 00:01:12.243 net/failsafe: not in enabled drivers build config 00:01:12.243 net/fm10k: not in enabled drivers build config 00:01:12.243 net/gve: not in enabled drivers build config 00:01:12.243 net/hinic: not in enabled drivers build config 00:01:12.243 net/hns3: not in enabled drivers build config 00:01:12.243 net/i40e: not in enabled drivers build config 00:01:12.243 net/iavf: not in enabled drivers build config 00:01:12.243 net/ice: not in enabled drivers build config 00:01:12.243 net/idpf: not in enabled drivers build config 00:01:12.243 net/igc: not in enabled drivers build config 00:01:12.243 net/ionic: not in enabled drivers build config 00:01:12.243 net/ipn3ke: not in enabled drivers build config 00:01:12.243 net/ixgbe: not in enabled drivers build config 00:01:12.243 net/mana: not in enabled drivers build config 00:01:12.243 net/memif: not in enabled drivers build config 00:01:12.243 net/mlx4: not in enabled drivers build config 00:01:12.243 net/mlx5: not in enabled drivers build config 00:01:12.243 net/mvneta: not in enabled drivers build config 00:01:12.243 net/mvpp2: not in enabled drivers build config 00:01:12.243 net/netvsc: not in enabled drivers build config 00:01:12.243 net/nfb: not in enabled drivers build config 00:01:12.243 net/nfp: not in enabled drivers build config 00:01:12.243 net/ngbe: not in enabled drivers build config 00:01:12.243 net/null: not in enabled drivers build config 00:01:12.243 net/octeontx: not in enabled drivers build config 00:01:12.243 net/octeon_ep: not in enabled drivers build config 00:01:12.243 net/pcap: not in enabled drivers build config 00:01:12.243 net/pfe: not in enabled drivers build config 00:01:12.243 net/qede: not in enabled drivers build config 00:01:12.243 net/ring: not in enabled drivers build config 00:01:12.243 net/sfc: not in enabled drivers build config 00:01:12.243 net/softnic: not in enabled drivers build config 00:01:12.243 net/tap: not in enabled drivers build config 00:01:12.243 net/thunderx: not in enabled drivers build 
config 00:01:12.244 net/txgbe: not in enabled drivers build config 00:01:12.244 net/vdev_netvsc: not in enabled drivers build config 00:01:12.244 net/vhost: not in enabled drivers build config 00:01:12.244 net/virtio: not in enabled drivers build config 00:01:12.244 net/vmxnet3: not in enabled drivers build config 00:01:12.244 raw/*: missing internal dependency, "rawdev" 00:01:12.244 crypto/armv8: not in enabled drivers build config 00:01:12.244 crypto/bcmfs: not in enabled drivers build config 00:01:12.244 crypto/caam_jr: not in enabled drivers build config 00:01:12.244 crypto/ccp: not in enabled drivers build config 00:01:12.244 crypto/cnxk: not in enabled drivers build config 00:01:12.244 crypto/dpaa_sec: not in enabled drivers build config 00:01:12.244 crypto/dpaa2_sec: not in enabled drivers build config 00:01:12.244 crypto/ipsec_mb: not in enabled drivers build config 00:01:12.244 crypto/mlx5: not in enabled drivers build config 00:01:12.244 crypto/mvsam: not in enabled drivers build config 00:01:12.244 crypto/nitrox: not in enabled drivers build config 00:01:12.244 crypto/null: not in enabled drivers build config 00:01:12.244 crypto/octeontx: not in enabled drivers build config 00:01:12.244 crypto/openssl: not in enabled drivers build config 00:01:12.244 crypto/scheduler: not in enabled drivers build config 00:01:12.244 crypto/uadk: not in enabled drivers build config 00:01:12.244 crypto/virtio: not in enabled drivers build config 00:01:12.244 compress/isal: not in enabled drivers build config 00:01:12.244 compress/mlx5: not in enabled drivers build config 00:01:12.244 compress/octeontx: not in enabled drivers build config 00:01:12.244 compress/zlib: not in enabled drivers build config 00:01:12.244 regex/*: missing internal dependency, "regexdev" 00:01:12.244 ml/*: missing internal dependency, "mldev" 00:01:12.244 vdpa/ifc: not in enabled drivers build config 00:01:12.244 vdpa/mlx5: not in enabled drivers build config 00:01:12.244 vdpa/nfp: not in enabled drivers build config 00:01:12.244 vdpa/sfc: not in enabled drivers build config 00:01:12.244 event/*: missing internal dependency, "eventdev" 00:01:12.244 baseband/*: missing internal dependency, "bbdev" 00:01:12.244 gpu/*: missing internal dependency, "gpudev" 00:01:12.244 00:01:12.244 00:01:12.501 Build targets in project: 84 00:01:12.501 00:01:12.501 DPDK 23.11.0 00:01:12.501 00:01:12.501 User defined options 00:01:12.501 buildtype : debug 00:01:12.501 default_library : shared 00:01:12.501 libdir : lib 00:01:12.501 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:12.501 b_sanitize : address 00:01:12.501 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:12.501 c_link_args : 00:01:12.501 cpu_instruction_set: native 00:01:12.501 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:12.501 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:12.501 enable_docs : false 00:01:12.501 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:12.501 enable_kmods : false 00:01:12.501 tests : false 00:01:12.501 00:01:12.501 Found ninja-1.11.1.git.kitware.jobserver-1 
at /usr/local/bin/ninja 00:01:12.765 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:01:12.765 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:12.765 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:12.765 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:12.765 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:12.765 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:12.765 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:13.035 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:13.036 [8/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:13.036 [9/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:13.036 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:13.036 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:13.036 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:13.036 [13/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:13.036 [14/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.036 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:13.036 [16/264] Linking static target lib/librte_kvargs.a 00:01:13.036 [17/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:13.036 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:13.036 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:13.036 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:13.036 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:13.036 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:13.036 [23/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:13.036 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:13.036 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:13.036 [26/264] Linking static target lib/librte_log.a 00:01:13.036 [27/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:13.296 [28/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:13.296 [29/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:13.296 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:13.296 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.296 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:13.296 [33/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:13.296 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:13.296 [35/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:13.296 [36/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:13.296 [37/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.296 [38/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:13.296 [39/264] Linking static target lib/librte_pci.a 00:01:13.296 [40/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:13.296 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:13.296 [42/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:13.296 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:13.296 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:13.296 [45/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:13.296 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:13.296 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:13.296 [48/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:13.296 [49/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:13.296 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:13.296 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:13.296 [52/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:13.296 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:13.296 [54/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:13.296 [55/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:13.296 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:13.296 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:13.296 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:13.296 [59/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:13.296 [60/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:13.296 [61/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:13.296 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:13.554 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:13.554 [64/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.554 [65/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:13.554 [66/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:13.554 [67/264] Linking static target lib/librte_meter.a 00:01:13.554 [68/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:13.554 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:13.554 [70/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:13.554 [71/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:13.554 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:13.554 [73/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:13.554 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:13.554 [75/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:13.554 [76/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:13.554 [77/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:13.554 [78/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:13.554 [79/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:13.554 
[80/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:13.554 [81/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.554 [82/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.554 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:13.554 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:13.554 [85/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:13.554 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:13.554 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:13.554 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:13.554 [89/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:13.554 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:13.554 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:13.554 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:13.554 [93/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:13.554 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:13.554 [95/264] Linking static target lib/librte_telemetry.a 00:01:13.554 [96/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:13.554 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:13.554 [98/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:13.554 [99/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:13.554 [100/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:13.554 [101/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:13.554 [102/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:13.554 [103/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:13.554 [104/264] Linking static target lib/librte_ring.a 00:01:13.554 [105/264] Linking static target lib/librte_cmdline.a 00:01:13.554 [106/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.554 [107/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:13.554 [108/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:13.554 [109/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:13.554 [110/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:13.554 [111/264] Linking static target lib/librte_rcu.a 00:01:13.554 [112/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.554 [113/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:13.554 [114/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:13.554 [115/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:13.554 [116/264] Linking target lib/librte_log.so.24.0 00:01:13.554 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:13.554 [118/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:13.554 [119/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:13.554 [120/264] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:13.554 [121/264] Linking static target lib/librte_timer.a 00:01:13.554 [122/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:13.554 [123/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:13.554 [124/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:13.554 [125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:13.554 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:13.554 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:13.554 [128/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:13.554 [129/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:13.554 [130/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:13.554 [131/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:13.554 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:13.811 [133/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:13.811 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:13.811 [135/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:13.811 [136/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:13.811 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:13.811 [138/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:13.811 [139/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:13.811 [140/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:13.811 [141/264] Linking static target lib/librte_compressdev.a 00:01:13.811 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:13.811 [143/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:13.811 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:13.811 [145/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.811 [146/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:13.811 [147/264] Linking static target lib/librte_dmadev.a 00:01:13.811 [148/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:13.811 [149/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:13.811 [150/264] Linking static target lib/librte_reorder.a 00:01:13.811 [151/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:13.811 [152/264] Linking static target lib/librte_mempool.a 00:01:13.811 [153/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:13.811 [154/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:13.811 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:13.811 [156/264] Linking target lib/librte_kvargs.so.24.0 00:01:13.811 [157/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:13.811 [158/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:13.811 [159/264] Linking static target lib/librte_power.a 00:01:13.811 [160/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.811 [161/264] Compiling C 
object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:13.811 [162/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:13.811 [163/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:13.811 [164/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:13.811 [165/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:13.811 [166/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.811 [167/264] Linking static target lib/librte_net.a 00:01:13.811 [168/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:13.811 [169/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:13.811 [170/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:13.811 [171/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.811 [172/264] Linking static target lib/librte_eal.a 00:01:13.811 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:13.811 [174/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.811 [175/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.811 [176/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:13.811 [177/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:13.811 [178/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.811 [179/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:13.811 [180/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.811 [181/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:13.811 [182/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.811 [183/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.811 [184/264] Linking target lib/librte_telemetry.so.24.0 00:01:13.811 [185/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.811 [186/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.811 [187/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.811 [188/264] Linking static target drivers/librte_bus_vdev.a 00:01:14.067 [189/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:14.068 [190/264] Linking static target lib/librte_mbuf.a 00:01:14.068 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.068 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:14.068 [193/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:14.068 [194/264] Linking static target lib/librte_security.a 00:01:14.068 [195/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.068 [196/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:14.068 [197/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:14.068 [198/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.068 [199/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:14.068 [200/264] Linking static target lib/librte_hash.a 00:01:14.068 [201/264] 
Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:14.068 [202/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:14.068 [203/264] Linking static target drivers/librte_bus_pci.a 00:01:14.068 [204/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:14.068 [205/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.068 [206/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.068 [207/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:14.068 [208/264] Linking static target drivers/librte_mempool_ring.a 00:01:14.068 [209/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.068 [210/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [211/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [212/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [213/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:14.325 [214/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [216/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.325 [217/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.582 [218/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.582 [219/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:14.582 [220/264] Linking static target lib/librte_cryptodev.a 00:01:14.839 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.839 [222/264] Linking static target lib/librte_ethdev.a 00:01:15.402 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.660 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.557 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.813 [226/264] Linking static target lib/librte_vhost.a 00:01:19.204 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.133 [228/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.133 [229/264] Linking target lib/librte_eal.so.24.0 00:01:20.133 [230/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:20.391 [231/264] Linking target lib/librte_ring.so.24.0 00:01:20.391 [232/264] Linking target lib/librte_pci.so.24.0 00:01:20.391 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:20.391 [234/264] Linking target lib/librte_dmadev.so.24.0 00:01:20.391 [235/264] Linking target lib/librte_timer.so.24.0 00:01:20.391 [236/264] Linking target lib/librte_meter.so.24.0 00:01:20.391 [237/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.391 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:20.391 [239/264] Generating symbol 
file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:20.391 [240/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:20.391 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:20.391 [242/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:20.391 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:20.391 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:20.391 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:20.391 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:20.391 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:20.649 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:20.649 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:20.649 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:20.649 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:01:20.649 [252/264] Linking target lib/librte_compressdev.so.24.0 00:01:20.649 [253/264] Linking target lib/librte_net.so.24.0 00:01:20.649 [254/264] Linking target lib/librte_reorder.so.24.0 00:01:20.650 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:20.650 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:20.908 [257/264] Linking target lib/librte_cmdline.so.24.0 00:01:20.908 [258/264] Linking target lib/librte_hash.so.24.0 00:01:20.908 [259/264] Linking target lib/librte_security.so.24.0 00:01:20.908 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:20.908 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:20.908 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:20.908 [263/264] Linking target lib/librte_power.so.24.0 00:01:20.908 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:20.908 INFO: autodetecting backend as ninja 00:01:20.908 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:21.839 CC lib/ut/ut.o 00:01:21.839 CC lib/log/log.o 00:01:21.839 CC lib/log/log_flags.o 00:01:21.839 CC lib/log/log_deprecated.o 00:01:21.839 CC lib/ut_mock/mock.o 00:01:21.839 LIB libspdk_ut_mock.a 00:01:21.839 LIB libspdk_ut.a 00:01:21.839 SO libspdk_ut_mock.so.6.0 00:01:21.839 SO libspdk_ut.so.2.0 00:01:21.839 LIB libspdk_log.a 00:01:21.839 SO libspdk_log.so.7.0 00:01:21.839 SYMLINK libspdk_ut.so 00:01:21.839 SYMLINK libspdk_ut_mock.so 00:01:21.839 SYMLINK libspdk_log.so 00:01:22.098 CC lib/dma/dma.o 00:01:22.098 CXX lib/trace_parser/trace.o 00:01:22.098 CC lib/util/base64.o 00:01:22.098 CC lib/util/bit_array.o 00:01:22.098 CC lib/util/cpuset.o 00:01:22.098 CC lib/util/crc64.o 00:01:22.098 CC lib/ioat/ioat.o 00:01:22.098 CC lib/util/crc16.o 00:01:22.098 CC lib/util/crc32_ieee.o 00:01:22.098 CC lib/util/dif.o 00:01:22.098 CC lib/util/crc32.o 00:01:22.098 CC lib/util/crc32c.o 00:01:22.098 CC lib/util/fd.o 00:01:22.098 CC lib/util/iov.o 00:01:22.098 CC lib/util/file.o 00:01:22.098 CC lib/util/hexlify.o 00:01:22.098 CC lib/util/math.o 00:01:22.098 CC lib/util/strerror_tls.o 00:01:22.098 CC lib/util/pipe.o 00:01:22.098 CC lib/util/string.o 00:01:22.098 CC lib/util/fd_group.o 00:01:22.098 CC lib/util/uuid.o 00:01:22.098 CC lib/util/xor.o 00:01:22.098 CC lib/util/zipf.o 00:01:22.356 
CC lib/vfio_user/host/vfio_user_pci.o 00:01:22.356 CC lib/vfio_user/host/vfio_user.o 00:01:22.356 LIB libspdk_dma.a 00:01:22.356 SO libspdk_dma.so.4.0 00:01:22.356 SYMLINK libspdk_dma.so 00:01:22.356 LIB libspdk_vfio_user.a 00:01:22.356 SO libspdk_vfio_user.so.5.0 00:01:22.356 LIB libspdk_ioat.a 00:01:22.614 SO libspdk_ioat.so.7.0 00:01:22.614 SYMLINK libspdk_vfio_user.so 00:01:22.614 SYMLINK libspdk_ioat.so 00:01:22.614 LIB libspdk_util.a 00:01:22.614 SO libspdk_util.so.9.0 00:01:22.871 SYMLINK libspdk_util.so 00:01:22.871 LIB libspdk_trace_parser.a 00:01:22.871 SO libspdk_trace_parser.so.5.0 00:01:22.871 SYMLINK libspdk_trace_parser.so 00:01:22.871 CC lib/vmd/vmd.o 00:01:22.871 CC lib/vmd/led.o 00:01:22.871 CC lib/conf/conf.o 00:01:22.871 CC lib/idxd/idxd_user.o 00:01:22.871 CC lib/env_dpdk/init.o 00:01:22.871 CC lib/idxd/idxd.o 00:01:22.871 CC lib/env_dpdk/pci.o 00:01:22.871 CC lib/env_dpdk/threads.o 00:01:22.871 CC lib/env_dpdk/env.o 00:01:22.871 CC lib/env_dpdk/memory.o 00:01:22.871 CC lib/env_dpdk/pci_ioat.o 00:01:22.871 CC lib/env_dpdk/pci_virtio.o 00:01:22.871 CC lib/env_dpdk/pci_idxd.o 00:01:22.871 CC lib/env_dpdk/pci_vmd.o 00:01:22.871 CC lib/env_dpdk/pci_event.o 00:01:22.871 CC lib/env_dpdk/sigbus_handler.o 00:01:22.871 CC lib/env_dpdk/pci_dpdk.o 00:01:22.871 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:22.871 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:22.871 CC lib/rdma/rdma_verbs.o 00:01:22.871 CC lib/rdma/common.o 00:01:22.871 CC lib/json/json_parse.o 00:01:22.871 CC lib/json/json_util.o 00:01:22.871 CC lib/json/json_write.o 00:01:23.128 LIB libspdk_conf.a 00:01:23.128 SO libspdk_conf.so.6.0 00:01:23.128 SYMLINK libspdk_conf.so 00:01:23.128 LIB libspdk_rdma.a 00:01:23.385 SO libspdk_rdma.so.6.0 00:01:23.385 SYMLINK libspdk_rdma.so 00:01:23.385 LIB libspdk_json.a 00:01:23.385 SO libspdk_json.so.6.0 00:01:23.385 SYMLINK libspdk_json.so 00:01:23.385 LIB libspdk_vmd.a 00:01:23.675 SO libspdk_vmd.so.6.0 00:01:23.675 SYMLINK libspdk_vmd.so 00:01:23.675 CC lib/jsonrpc/jsonrpc_server.o 00:01:23.675 CC lib/jsonrpc/jsonrpc_client.o 00:01:23.675 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:23.675 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:23.675 LIB libspdk_idxd.a 00:01:23.675 SO libspdk_idxd.so.12.0 00:01:23.675 SYMLINK libspdk_idxd.so 00:01:23.967 LIB libspdk_jsonrpc.a 00:01:23.967 SO libspdk_jsonrpc.so.6.0 00:01:23.967 SYMLINK libspdk_jsonrpc.so 00:01:23.967 LIB libspdk_env_dpdk.a 00:01:24.224 SO libspdk_env_dpdk.so.14.0 00:01:24.224 SYMLINK libspdk_env_dpdk.so 00:01:24.224 CC lib/rpc/rpc.o 00:01:24.482 LIB libspdk_rpc.a 00:01:24.482 SO libspdk_rpc.so.6.0 00:01:24.482 SYMLINK libspdk_rpc.so 00:01:24.740 CC lib/trace/trace.o 00:01:24.740 CC lib/trace/trace_flags.o 00:01:24.740 CC lib/trace/trace_rpc.o 00:01:24.740 CC lib/keyring/keyring_rpc.o 00:01:24.740 CC lib/keyring/keyring.o 00:01:24.740 CC lib/notify/notify.o 00:01:24.740 CC lib/notify/notify_rpc.o 00:01:24.740 LIB libspdk_keyring.a 00:01:24.740 LIB libspdk_notify.a 00:01:24.997 SO libspdk_keyring.so.1.0 00:01:24.997 SO libspdk_notify.so.6.0 00:01:24.997 SYMLINK libspdk_keyring.so 00:01:24.997 LIB libspdk_trace.a 00:01:24.997 SYMLINK libspdk_notify.so 00:01:24.997 SO libspdk_trace.so.10.0 00:01:24.997 SYMLINK libspdk_trace.so 00:01:25.256 CC lib/sock/sock.o 00:01:25.256 CC lib/sock/sock_rpc.o 00:01:25.256 CC lib/thread/iobuf.o 00:01:25.256 CC lib/thread/thread.o 00:01:25.514 LIB libspdk_sock.a 00:01:25.514 SO libspdk_sock.so.9.0 00:01:25.514 SYMLINK libspdk_sock.so 00:01:25.771 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:25.771 CC lib/nvme/nvme_fabric.o 
00:01:25.771 CC lib/nvme/nvme_ctrlr.o 00:01:25.771 CC lib/nvme/nvme_pcie_common.o 00:01:25.771 CC lib/nvme/nvme_ns_cmd.o 00:01:25.771 CC lib/nvme/nvme_ns.o 00:01:25.771 CC lib/nvme/nvme_pcie.o 00:01:25.771 CC lib/nvme/nvme_qpair.o 00:01:25.771 CC lib/nvme/nvme.o 00:01:25.771 CC lib/nvme/nvme_quirks.o 00:01:25.771 CC lib/nvme/nvme_discovery.o 00:01:25.771 CC lib/nvme/nvme_transport.o 00:01:25.771 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:25.771 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:25.771 CC lib/nvme/nvme_tcp.o 00:01:25.771 CC lib/nvme/nvme_opal.o 00:01:25.771 CC lib/nvme/nvme_io_msg.o 00:01:25.771 CC lib/nvme/nvme_poll_group.o 00:01:25.771 CC lib/nvme/nvme_zns.o 00:01:25.771 CC lib/nvme/nvme_stubs.o 00:01:25.771 CC lib/nvme/nvme_cuse.o 00:01:25.771 CC lib/nvme/nvme_auth.o 00:01:25.771 CC lib/nvme/nvme_rdma.o 00:01:26.704 LIB libspdk_thread.a 00:01:26.961 SO libspdk_thread.so.10.0 00:01:26.961 SYMLINK libspdk_thread.so 00:01:27.218 CC lib/blob/blobstore.o 00:01:27.218 CC lib/blob/request.o 00:01:27.218 CC lib/blob/zeroes.o 00:01:27.218 CC lib/blob/blob_bs_dev.o 00:01:27.218 CC lib/init/json_config.o 00:01:27.218 CC lib/accel/accel_rpc.o 00:01:27.218 CC lib/init/subsystem.o 00:01:27.218 CC lib/accel/accel_sw.o 00:01:27.218 CC lib/init/subsystem_rpc.o 00:01:27.218 CC lib/init/rpc.o 00:01:27.218 CC lib/accel/accel.o 00:01:27.218 CC lib/virtio/virtio.o 00:01:27.218 CC lib/virtio/virtio_pci.o 00:01:27.218 CC lib/virtio/virtio_vfio_user.o 00:01:27.218 CC lib/virtio/virtio_vhost_user.o 00:01:27.218 LIB libspdk_init.a 00:01:27.475 SO libspdk_init.so.5.0 00:01:27.475 SYMLINK libspdk_init.so 00:01:27.475 LIB libspdk_virtio.a 00:01:27.475 SO libspdk_virtio.so.7.0 00:01:27.475 SYMLINK libspdk_virtio.so 00:01:27.732 CC lib/event/app.o 00:01:27.732 CC lib/event/app_rpc.o 00:01:27.732 CC lib/event/reactor.o 00:01:27.732 CC lib/event/log_rpc.o 00:01:27.732 CC lib/event/scheduler_static.o 00:01:27.992 LIB libspdk_nvme.a 00:01:27.992 LIB libspdk_accel.a 00:01:27.992 SO libspdk_accel.so.15.0 00:01:27.992 SO libspdk_nvme.so.13.0 00:01:27.992 LIB libspdk_event.a 00:01:27.992 SO libspdk_event.so.13.0 00:01:27.992 SYMLINK libspdk_accel.so 00:01:27.992 SYMLINK libspdk_event.so 00:01:28.251 CC lib/bdev/bdev.o 00:01:28.251 CC lib/bdev/bdev_rpc.o 00:01:28.251 CC lib/bdev/part.o 00:01:28.251 CC lib/bdev/bdev_zone.o 00:01:28.251 CC lib/bdev/scsi_nvme.o 00:01:28.251 SYMLINK libspdk_nvme.so 00:01:29.625 LIB libspdk_blob.a 00:01:29.625 SO libspdk_blob.so.11.0 00:01:29.625 SYMLINK libspdk_blob.so 00:01:29.883 CC lib/lvol/lvol.o 00:01:29.883 CC lib/blobfs/blobfs.o 00:01:29.883 CC lib/blobfs/tree.o 00:01:30.822 LIB libspdk_bdev.a 00:01:30.822 SO libspdk_bdev.so.15.0 00:01:30.822 LIB libspdk_blobfs.a 00:01:30.822 SYMLINK libspdk_bdev.so 00:01:30.822 SO libspdk_blobfs.so.10.0 00:01:31.081 LIB libspdk_lvol.a 00:01:31.081 SYMLINK libspdk_blobfs.so 00:01:31.081 SO libspdk_lvol.so.10.0 00:01:31.081 SYMLINK libspdk_lvol.so 00:01:31.081 CC lib/ublk/ublk.o 00:01:31.081 CC lib/ublk/ublk_rpc.o 00:01:31.081 CC lib/scsi/dev.o 00:01:31.081 CC lib/scsi/lun.o 00:01:31.081 CC lib/scsi/scsi.o 00:01:31.081 CC lib/scsi/port.o 00:01:31.081 CC lib/scsi/scsi_pr.o 00:01:31.081 CC lib/scsi/task.o 00:01:31.081 CC lib/scsi/scsi_bdev.o 00:01:31.081 CC lib/scsi/scsi_rpc.o 00:01:31.081 CC lib/ftl/ftl_core.o 00:01:31.081 CC lib/ftl/ftl_init.o 00:01:31.081 CC lib/ftl/ftl_layout.o 00:01:31.081 CC lib/ftl/ftl_debug.o 00:01:31.081 CC lib/ftl/ftl_io.o 00:01:31.081 CC lib/ftl/ftl_l2p.o 00:01:31.081 CC lib/ftl/ftl_sb.o 00:01:31.081 CC lib/ftl/ftl_nv_cache.o 
00:01:31.081 CC lib/ftl/ftl_l2p_flat.o 00:01:31.081 CC lib/ftl/ftl_band_ops.o 00:01:31.081 CC lib/ftl/ftl_writer.o 00:01:31.081 CC lib/ftl/ftl_band.o 00:01:31.081 CC lib/ftl/ftl_reloc.o 00:01:31.081 CC lib/ftl/ftl_rq.o 00:01:31.081 CC lib/ftl/ftl_l2p_cache.o 00:01:31.081 CC lib/ftl/ftl_p2l.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:31.081 CC lib/nbd/nbd.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:31.081 CC lib/nbd/nbd_rpc.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:31.081 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:31.081 CC lib/ftl/utils/ftl_md.o 00:01:31.081 CC lib/ftl/utils/ftl_conf.o 00:01:31.081 CC lib/ftl/utils/ftl_mempool.o 00:01:31.081 CC lib/ftl/utils/ftl_bitmap.o 00:01:31.081 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:31.081 CC lib/ftl/utils/ftl_property.o 00:01:31.081 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:31.081 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:31.081 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:31.081 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:31.081 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:31.081 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:31.081 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:31.081 CC lib/nvmf/ctrlr_discovery.o 00:01:31.081 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:31.081 CC lib/nvmf/ctrlr.o 00:01:31.081 CC lib/nvmf/ctrlr_bdev.o 00:01:31.081 CC lib/nvmf/subsystem.o 00:01:31.081 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:31.081 CC lib/nvmf/nvmf.o 00:01:31.081 CC lib/ftl/ftl_trace.o 00:01:31.081 CC lib/ftl/base/ftl_base_bdev.o 00:01:31.081 CC lib/ftl/base/ftl_base_dev.o 00:01:31.081 CC lib/nvmf/tcp.o 00:01:31.081 CC lib/nvmf/stubs.o 00:01:31.081 CC lib/nvmf/mdns_server.o 00:01:31.081 CC lib/nvmf/transport.o 00:01:31.081 CC lib/nvmf/nvmf_rpc.o 00:01:31.081 CC lib/nvmf/rdma.o 00:01:31.081 CC lib/nvmf/auth.o 00:01:31.649 LIB libspdk_nbd.a 00:01:31.649 SO libspdk_nbd.so.7.0 00:01:31.649 SYMLINK libspdk_nbd.so 00:01:31.907 LIB libspdk_scsi.a 00:01:31.907 SO libspdk_scsi.so.9.0 00:01:31.907 LIB libspdk_ublk.a 00:01:31.907 SO libspdk_ublk.so.3.0 00:01:31.907 SYMLINK libspdk_scsi.so 00:01:32.165 SYMLINK libspdk_ublk.so 00:01:32.165 LIB libspdk_ftl.a 00:01:32.165 CC lib/vhost/vhost.o 00:01:32.165 CC lib/vhost/vhost_rpc.o 00:01:32.165 CC lib/vhost/vhost_scsi.o 00:01:32.165 CC lib/vhost/vhost_blk.o 00:01:32.165 CC lib/vhost/rte_vhost_user.o 00:01:32.165 CC lib/iscsi/conn.o 00:01:32.165 CC lib/iscsi/init_grp.o 00:01:32.165 CC lib/iscsi/iscsi.o 00:01:32.165 CC lib/iscsi/param.o 00:01:32.165 CC lib/iscsi/md5.o 00:01:32.165 CC lib/iscsi/portal_grp.o 00:01:32.165 CC lib/iscsi/iscsi_subsystem.o 00:01:32.165 CC lib/iscsi/tgt_node.o 00:01:32.165 CC lib/iscsi/task.o 00:01:32.165 CC lib/iscsi/iscsi_rpc.o 00:01:32.423 SO libspdk_ftl.so.9.0 00:01:32.683 SYMLINK libspdk_ftl.so 00:01:33.249 LIB libspdk_vhost.a 00:01:33.249 SO libspdk_vhost.so.8.0 00:01:33.509 SYMLINK libspdk_vhost.so 00:01:33.509 LIB libspdk_nvmf.a 00:01:33.768 SO libspdk_nvmf.so.18.0 00:01:33.768 LIB libspdk_iscsi.a 00:01:33.768 SO libspdk_iscsi.so.8.0 00:01:33.768 SYMLINK libspdk_nvmf.so 00:01:34.025 SYMLINK libspdk_iscsi.so 00:01:34.283 CC module/env_dpdk/env_dpdk_rpc.o 00:01:34.283 CC 
module/accel/iaa/accel_iaa_rpc.o 00:01:34.283 CC module/accel/iaa/accel_iaa.o 00:01:34.283 CC module/scheduler/gscheduler/gscheduler.o 00:01:34.283 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:34.283 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:34.283 CC module/accel/error/accel_error.o 00:01:34.283 CC module/accel/error/accel_error_rpc.o 00:01:34.283 CC module/sock/posix/posix.o 00:01:34.283 CC module/accel/ioat/accel_ioat.o 00:01:34.283 CC module/accel/ioat/accel_ioat_rpc.o 00:01:34.283 CC module/keyring/file/keyring_rpc.o 00:01:34.283 CC module/keyring/file/keyring.o 00:01:34.283 CC module/accel/dsa/accel_dsa.o 00:01:34.283 CC module/accel/dsa/accel_dsa_rpc.o 00:01:34.283 CC module/blob/bdev/blob_bdev.o 00:01:34.283 LIB libspdk_env_dpdk_rpc.a 00:01:34.283 SO libspdk_env_dpdk_rpc.so.6.0 00:01:34.540 SYMLINK libspdk_env_dpdk_rpc.so 00:01:34.540 LIB libspdk_scheduler_gscheduler.a 00:01:34.540 LIB libspdk_scheduler_dpdk_governor.a 00:01:34.540 LIB libspdk_keyring_file.a 00:01:34.540 SO libspdk_scheduler_gscheduler.so.4.0 00:01:34.540 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:34.540 LIB libspdk_accel_error.a 00:01:34.540 SO libspdk_keyring_file.so.1.0 00:01:34.540 LIB libspdk_accel_dsa.a 00:01:34.540 LIB libspdk_scheduler_dynamic.a 00:01:34.540 LIB libspdk_accel_iaa.a 00:01:34.540 LIB libspdk_accel_ioat.a 00:01:34.540 SO libspdk_accel_iaa.so.3.0 00:01:34.540 SYMLINK libspdk_scheduler_gscheduler.so 00:01:34.540 SO libspdk_scheduler_dynamic.so.4.0 00:01:34.540 SO libspdk_accel_error.so.2.0 00:01:34.540 SO libspdk_accel_dsa.so.5.0 00:01:34.540 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:34.540 SO libspdk_accel_ioat.so.6.0 00:01:34.540 SYMLINK libspdk_keyring_file.so 00:01:34.540 SYMLINK libspdk_scheduler_dynamic.so 00:01:34.540 SYMLINK libspdk_accel_iaa.so 00:01:34.540 SYMLINK libspdk_accel_error.so 00:01:34.540 SYMLINK libspdk_accel_dsa.so 00:01:34.540 LIB libspdk_blob_bdev.a 00:01:34.798 SYMLINK libspdk_accel_ioat.so 00:01:34.798 SO libspdk_blob_bdev.so.11.0 00:01:34.798 SYMLINK libspdk_blob_bdev.so 00:01:35.056 CC module/blobfs/bdev/blobfs_bdev.o 00:01:35.056 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:35.056 CC module/bdev/raid/bdev_raid.o 00:01:35.056 CC module/bdev/raid/bdev_raid_rpc.o 00:01:35.056 CC module/bdev/raid/raid0.o 00:01:35.056 CC module/bdev/raid/bdev_raid_sb.o 00:01:35.056 CC module/bdev/raid/concat.o 00:01:35.056 CC module/bdev/raid/raid1.o 00:01:35.056 CC module/bdev/error/vbdev_error.o 00:01:35.056 CC module/bdev/error/vbdev_error_rpc.o 00:01:35.056 CC module/bdev/split/vbdev_split_rpc.o 00:01:35.056 CC module/bdev/nvme/bdev_nvme.o 00:01:35.056 CC module/bdev/aio/bdev_aio.o 00:01:35.056 CC module/bdev/passthru/vbdev_passthru.o 00:01:35.056 CC module/bdev/split/vbdev_split.o 00:01:35.056 CC module/bdev/nvme/vbdev_opal.o 00:01:35.056 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:35.056 CC module/bdev/aio/bdev_aio_rpc.o 00:01:35.056 CC module/bdev/nvme/nvme_rpc.o 00:01:35.056 CC module/bdev/nvme/bdev_mdns_client.o 00:01:35.056 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:35.056 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:35.056 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:35.056 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:35.056 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:35.056 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:35.056 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:35.056 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:35.056 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:35.056 CC module/bdev/malloc/bdev_malloc.o 
00:01:35.056 CC module/bdev/gpt/gpt.o 00:01:35.056 CC module/bdev/gpt/vbdev_gpt.o 00:01:35.056 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:35.056 CC module/bdev/delay/vbdev_delay.o 00:01:35.056 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:35.056 CC module/bdev/lvol/vbdev_lvol.o 00:01:35.056 CC module/bdev/iscsi/bdev_iscsi.o 00:01:35.056 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:35.056 CC module/bdev/null/bdev_null.o 00:01:35.056 CC module/bdev/ftl/bdev_ftl.o 00:01:35.056 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:35.056 CC module/bdev/null/bdev_null_rpc.o 00:01:35.315 LIB libspdk_sock_posix.a 00:01:35.315 LIB libspdk_blobfs_bdev.a 00:01:35.315 SO libspdk_blobfs_bdev.so.6.0 00:01:35.315 SO libspdk_sock_posix.so.6.0 00:01:35.315 SYMLINK libspdk_blobfs_bdev.so 00:01:35.315 SYMLINK libspdk_sock_posix.so 00:01:35.315 LIB libspdk_bdev_split.a 00:01:35.315 SO libspdk_bdev_split.so.6.0 00:01:35.315 LIB libspdk_bdev_error.a 00:01:35.315 LIB libspdk_bdev_zone_block.a 00:01:35.315 LIB libspdk_bdev_null.a 00:01:35.315 SO libspdk_bdev_error.so.6.0 00:01:35.315 LIB libspdk_bdev_malloc.a 00:01:35.573 SO libspdk_bdev_zone_block.so.6.0 00:01:35.573 SO libspdk_bdev_null.so.6.0 00:01:35.573 SYMLINK libspdk_bdev_split.so 00:01:35.573 LIB libspdk_bdev_gpt.a 00:01:35.573 SO libspdk_bdev_malloc.so.6.0 00:01:35.573 SYMLINK libspdk_bdev_error.so 00:01:35.573 SO libspdk_bdev_gpt.so.6.0 00:01:35.573 LIB libspdk_bdev_passthru.a 00:01:35.573 LIB libspdk_bdev_ftl.a 00:01:35.573 SYMLINK libspdk_bdev_zone_block.so 00:01:35.573 SYMLINK libspdk_bdev_null.so 00:01:35.573 SO libspdk_bdev_ftl.so.6.0 00:01:35.573 SO libspdk_bdev_passthru.so.6.0 00:01:35.573 LIB libspdk_bdev_aio.a 00:01:35.573 SYMLINK libspdk_bdev_malloc.so 00:01:35.573 SYMLINK libspdk_bdev_gpt.so 00:01:35.573 SO libspdk_bdev_aio.so.6.0 00:01:35.573 LIB libspdk_bdev_delay.a 00:01:35.573 LIB libspdk_bdev_virtio.a 00:01:35.573 LIB libspdk_bdev_iscsi.a 00:01:35.573 SYMLINK libspdk_bdev_ftl.so 00:01:35.573 SYMLINK libspdk_bdev_passthru.so 00:01:35.573 SO libspdk_bdev_delay.so.6.0 00:01:35.573 SO libspdk_bdev_iscsi.so.6.0 00:01:35.573 SYMLINK libspdk_bdev_aio.so 00:01:35.573 SO libspdk_bdev_virtio.so.6.0 00:01:35.573 SYMLINK libspdk_bdev_delay.so 00:01:35.573 SYMLINK libspdk_bdev_iscsi.so 00:01:35.573 LIB libspdk_bdev_lvol.a 00:01:35.573 SYMLINK libspdk_bdev_virtio.so 00:01:35.831 SO libspdk_bdev_lvol.so.6.0 00:01:35.831 SYMLINK libspdk_bdev_lvol.so 00:01:35.831 LIB libspdk_bdev_raid.a 00:01:35.831 SO libspdk_bdev_raid.so.6.0 00:01:36.089 SYMLINK libspdk_bdev_raid.so 00:01:36.659 LIB libspdk_bdev_nvme.a 00:01:36.659 SO libspdk_bdev_nvme.so.7.0 00:01:36.916 SYMLINK libspdk_bdev_nvme.so 00:01:37.482 CC module/event/subsystems/vmd/vmd.o 00:01:37.482 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:37.482 CC module/event/subsystems/sock/sock.o 00:01:37.482 CC module/event/subsystems/scheduler/scheduler.o 00:01:37.482 CC module/event/subsystems/iobuf/iobuf.o 00:01:37.482 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:37.482 CC module/event/subsystems/keyring/keyring.o 00:01:37.482 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:37.482 LIB libspdk_event_scheduler.a 00:01:37.482 LIB libspdk_event_sock.a 00:01:37.482 LIB libspdk_event_vmd.a 00:01:37.482 LIB libspdk_event_keyring.a 00:01:37.482 SO libspdk_event_scheduler.so.4.0 00:01:37.482 LIB libspdk_event_vhost_blk.a 00:01:37.482 SO libspdk_event_sock.so.5.0 00:01:37.482 SO libspdk_event_vmd.so.6.0 00:01:37.482 LIB libspdk_event_iobuf.a 00:01:37.482 SO libspdk_event_keyring.so.1.0 00:01:37.482 SO 
libspdk_event_vhost_blk.so.3.0 00:01:37.482 SO libspdk_event_iobuf.so.3.0 00:01:37.482 SYMLINK libspdk_event_scheduler.so 00:01:37.482 SYMLINK libspdk_event_sock.so 00:01:37.482 SYMLINK libspdk_event_keyring.so 00:01:37.482 SYMLINK libspdk_event_vhost_blk.so 00:01:37.482 SYMLINK libspdk_event_vmd.so 00:01:37.482 SYMLINK libspdk_event_iobuf.so 00:01:37.741 CC module/event/subsystems/accel/accel.o 00:01:38.064 LIB libspdk_event_accel.a 00:01:38.064 SO libspdk_event_accel.so.6.0 00:01:38.064 SYMLINK libspdk_event_accel.so 00:01:38.324 CC module/event/subsystems/bdev/bdev.o 00:01:38.324 LIB libspdk_event_bdev.a 00:01:38.324 SO libspdk_event_bdev.so.6.0 00:01:38.324 SYMLINK libspdk_event_bdev.so 00:01:38.581 CC module/event/subsystems/ublk/ublk.o 00:01:38.581 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:38.581 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:38.581 CC module/event/subsystems/scsi/scsi.o 00:01:38.581 CC module/event/subsystems/nbd/nbd.o 00:01:38.839 LIB libspdk_event_nbd.a 00:01:38.839 LIB libspdk_event_scsi.a 00:01:38.839 LIB libspdk_event_ublk.a 00:01:38.839 SO libspdk_event_nbd.so.6.0 00:01:38.839 SO libspdk_event_scsi.so.6.0 00:01:38.839 SO libspdk_event_ublk.so.3.0 00:01:38.839 LIB libspdk_event_nvmf.a 00:01:38.839 SYMLINK libspdk_event_scsi.so 00:01:38.839 SYMLINK libspdk_event_nbd.so 00:01:38.839 SO libspdk_event_nvmf.so.6.0 00:01:38.839 SYMLINK libspdk_event_ublk.so 00:01:38.839 SYMLINK libspdk_event_nvmf.so 00:01:39.097 CC module/event/subsystems/iscsi/iscsi.o 00:01:39.097 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:39.097 LIB libspdk_event_iscsi.a 00:01:39.097 SO libspdk_event_iscsi.so.6.0 00:01:39.097 LIB libspdk_event_vhost_scsi.a 00:01:39.354 SYMLINK libspdk_event_iscsi.so 00:01:39.354 SO libspdk_event_vhost_scsi.so.3.0 00:01:39.354 SYMLINK libspdk_event_vhost_scsi.so 00:01:39.354 SO libspdk.so.6.0 00:01:39.354 SYMLINK libspdk.so 00:01:39.613 CC app/spdk_lspci/spdk_lspci.o 00:01:39.613 CC app/spdk_nvme_discover/discovery_aer.o 00:01:39.613 CC app/spdk_nvme_perf/perf.o 00:01:39.613 CC app/spdk_nvme_identify/identify.o 00:01:39.613 CC app/spdk_top/spdk_top.o 00:01:39.613 CXX app/trace/trace.o 00:01:39.613 CC app/trace_record/trace_record.o 00:01:39.613 TEST_HEADER include/spdk/accel.h 00:01:39.613 TEST_HEADER include/spdk/barrier.h 00:01:39.613 CC app/spdk_dd/spdk_dd.o 00:01:39.613 TEST_HEADER include/spdk/base64.h 00:01:39.613 TEST_HEADER include/spdk/bdev.h 00:01:39.613 TEST_HEADER include/spdk/accel_module.h 00:01:39.613 TEST_HEADER include/spdk/bdev_module.h 00:01:39.613 CC test/rpc_client/rpc_client_test.o 00:01:39.613 TEST_HEADER include/spdk/bdev_zone.h 00:01:39.613 TEST_HEADER include/spdk/bit_array.h 00:01:39.613 TEST_HEADER include/spdk/bit_pool.h 00:01:39.613 TEST_HEADER include/spdk/blob_bdev.h 00:01:39.613 TEST_HEADER include/spdk/assert.h 00:01:39.613 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:39.613 TEST_HEADER include/spdk/blob.h 00:01:39.613 TEST_HEADER include/spdk/blobfs.h 00:01:39.613 TEST_HEADER include/spdk/conf.h 00:01:39.613 TEST_HEADER include/spdk/config.h 00:01:39.878 TEST_HEADER include/spdk/crc16.h 00:01:39.878 TEST_HEADER include/spdk/cpuset.h 00:01:39.878 TEST_HEADER include/spdk/crc64.h 00:01:39.878 TEST_HEADER include/spdk/dma.h 00:01:39.878 TEST_HEADER include/spdk/dif.h 00:01:39.878 CC app/iscsi_tgt/iscsi_tgt.o 00:01:39.878 TEST_HEADER include/spdk/crc32.h 00:01:39.878 TEST_HEADER include/spdk/event.h 00:01:39.878 TEST_HEADER include/spdk/endian.h 00:01:39.878 TEST_HEADER include/spdk/env.h 00:01:39.878 CC 
app/nvmf_tgt/nvmf_main.o 00:01:39.878 TEST_HEADER include/spdk/env_dpdk.h 00:01:39.878 TEST_HEADER include/spdk/fd_group.h 00:01:39.878 CC app/spdk_tgt/spdk_tgt.o 00:01:39.878 TEST_HEADER include/spdk/file.h 00:01:39.878 TEST_HEADER include/spdk/ftl.h 00:01:39.878 TEST_HEADER include/spdk/fd.h 00:01:39.878 TEST_HEADER include/spdk/gpt_spec.h 00:01:39.878 TEST_HEADER include/spdk/hexlify.h 00:01:39.878 TEST_HEADER include/spdk/idxd.h 00:01:39.878 TEST_HEADER include/spdk/idxd_spec.h 00:01:39.878 TEST_HEADER include/spdk/histogram_data.h 00:01:39.878 TEST_HEADER include/spdk/init.h 00:01:39.878 TEST_HEADER include/spdk/ioat_spec.h 00:01:39.878 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:39.878 TEST_HEADER include/spdk/ioat.h 00:01:39.878 TEST_HEADER include/spdk/iscsi_spec.h 00:01:39.878 TEST_HEADER include/spdk/json.h 00:01:39.878 CC app/vhost/vhost.o 00:01:39.878 TEST_HEADER include/spdk/jsonrpc.h 00:01:39.878 TEST_HEADER include/spdk/keyring.h 00:01:39.878 TEST_HEADER include/spdk/keyring_module.h 00:01:39.878 TEST_HEADER include/spdk/log.h 00:01:39.878 TEST_HEADER include/spdk/memory.h 00:01:39.878 TEST_HEADER include/spdk/mmio.h 00:01:39.878 TEST_HEADER include/spdk/likely.h 00:01:39.878 TEST_HEADER include/spdk/lvol.h 00:01:39.878 TEST_HEADER include/spdk/notify.h 00:01:39.878 TEST_HEADER include/spdk/nbd.h 00:01:39.878 TEST_HEADER include/spdk/nvme.h 00:01:39.878 TEST_HEADER include/spdk/nvme_intel.h 00:01:39.878 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:39.878 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:39.878 TEST_HEADER include/spdk/nvme_spec.h 00:01:39.878 TEST_HEADER include/spdk/nvme_zns.h 00:01:39.878 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:39.878 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:39.878 TEST_HEADER include/spdk/nvmf.h 00:01:39.878 TEST_HEADER include/spdk/nvmf_transport.h 00:01:39.878 TEST_HEADER include/spdk/nvmf_spec.h 00:01:39.878 TEST_HEADER include/spdk/opal.h 00:01:39.878 TEST_HEADER include/spdk/opal_spec.h 00:01:39.878 TEST_HEADER include/spdk/pipe.h 00:01:39.878 TEST_HEADER include/spdk/pci_ids.h 00:01:39.878 TEST_HEADER include/spdk/queue.h 00:01:39.878 TEST_HEADER include/spdk/reduce.h 00:01:39.878 TEST_HEADER include/spdk/rpc.h 00:01:39.878 TEST_HEADER include/spdk/scsi.h 00:01:39.878 TEST_HEADER include/spdk/scheduler.h 00:01:39.878 TEST_HEADER include/spdk/scsi_spec.h 00:01:39.878 TEST_HEADER include/spdk/sock.h 00:01:39.878 TEST_HEADER include/spdk/stdinc.h 00:01:39.878 TEST_HEADER include/spdk/string.h 00:01:39.878 TEST_HEADER include/spdk/trace.h 00:01:39.878 TEST_HEADER include/spdk/thread.h 00:01:39.878 TEST_HEADER include/spdk/tree.h 00:01:39.878 TEST_HEADER include/spdk/trace_parser.h 00:01:39.878 TEST_HEADER include/spdk/ublk.h 00:01:39.878 TEST_HEADER include/spdk/util.h 00:01:39.878 TEST_HEADER include/spdk/version.h 00:01:39.878 TEST_HEADER include/spdk/uuid.h 00:01:39.878 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:39.878 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:39.878 TEST_HEADER include/spdk/vhost.h 00:01:39.878 TEST_HEADER include/spdk/vmd.h 00:01:39.878 TEST_HEADER include/spdk/xor.h 00:01:39.878 TEST_HEADER include/spdk/zipf.h 00:01:39.878 CXX test/cpp_headers/accel.o 00:01:39.878 CXX test/cpp_headers/assert.o 00:01:39.878 CXX test/cpp_headers/accel_module.o 00:01:39.878 CXX test/cpp_headers/barrier.o 00:01:39.878 CXX test/cpp_headers/bdev.o 00:01:39.878 CXX test/cpp_headers/base64.o 00:01:39.878 CXX test/cpp_headers/bdev_module.o 00:01:39.878 CXX test/cpp_headers/bdev_zone.o 00:01:39.878 CXX 
test/cpp_headers/bit_array.o 00:01:39.878 CXX test/cpp_headers/blob_bdev.o 00:01:39.878 CXX test/cpp_headers/blobfs_bdev.o 00:01:39.878 CXX test/cpp_headers/blobfs.o 00:01:39.878 CXX test/cpp_headers/bit_pool.o 00:01:39.878 CXX test/cpp_headers/blob.o 00:01:39.878 CXX test/cpp_headers/config.o 00:01:39.878 CXX test/cpp_headers/conf.o 00:01:39.878 CXX test/cpp_headers/cpuset.o 00:01:39.878 CXX test/cpp_headers/crc32.o 00:01:39.878 CXX test/cpp_headers/crc16.o 00:01:39.878 CXX test/cpp_headers/crc64.o 00:01:39.878 CXX test/cpp_headers/dif.o 00:01:39.878 CXX test/cpp_headers/env_dpdk.o 00:01:39.878 CXX test/cpp_headers/endian.o 00:01:39.878 CXX test/cpp_headers/fd_group.o 00:01:39.878 CXX test/cpp_headers/dma.o 00:01:39.878 CXX test/cpp_headers/fd.o 00:01:39.878 CXX test/cpp_headers/event.o 00:01:39.878 CXX test/cpp_headers/file.o 00:01:39.878 CXX test/cpp_headers/ftl.o 00:01:39.878 CXX test/cpp_headers/gpt_spec.o 00:01:39.878 CXX test/cpp_headers/env.o 00:01:39.878 CXX test/cpp_headers/histogram_data.o 00:01:39.878 CXX test/cpp_headers/hexlify.o 00:01:39.878 CXX test/cpp_headers/ioat.o 00:01:39.878 CXX test/cpp_headers/idxd.o 00:01:39.878 CXX test/cpp_headers/ioat_spec.o 00:01:39.878 CXX test/cpp_headers/idxd_spec.o 00:01:39.878 CXX test/cpp_headers/init.o 00:01:39.878 CXX test/cpp_headers/json.o 00:01:39.878 CXX test/cpp_headers/keyring.o 00:01:39.878 CXX test/cpp_headers/iscsi_spec.o 00:01:39.878 CXX test/cpp_headers/jsonrpc.o 00:01:39.878 CXX test/cpp_headers/log.o 00:01:39.878 CXX test/cpp_headers/keyring_module.o 00:01:39.878 CXX test/cpp_headers/likely.o 00:01:39.878 CXX test/cpp_headers/lvol.o 00:01:39.878 CXX test/cpp_headers/memory.o 00:01:39.878 CXX test/cpp_headers/nbd.o 00:01:39.878 CXX test/cpp_headers/mmio.o 00:01:39.878 CXX test/cpp_headers/notify.o 00:01:39.878 CXX test/cpp_headers/nvme.o 00:01:39.878 CXX test/cpp_headers/nvme_ocssd.o 00:01:39.878 CXX test/cpp_headers/nvme_intel.o 00:01:40.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:40.146 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:40.146 CC test/app/histogram_perf/histogram_perf.o 00:01:40.146 CC test/app/jsoncat/jsoncat.o 00:01:40.146 CC test/event/reactor/reactor.o 00:01:40.146 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:40.146 CC test/event/reactor_perf/reactor_perf.o 00:01:40.146 CC examples/ioat/perf/perf.o 00:01:40.146 CC test/app/bdev_svc/bdev_svc.o 00:01:40.146 CC app/fio/nvme/fio_plugin.o 00:01:40.146 CC examples/nvme/arbitration/arbitration.o 00:01:40.146 LINK spdk_lspci 00:01:40.146 CXX test/cpp_headers/nvme_spec.o 00:01:40.146 CC examples/vmd/lsvmd/lsvmd.o 00:01:40.146 CC examples/util/zipf/zipf.o 00:01:40.146 CC examples/nvme/abort/abort.o 00:01:40.146 CC examples/nvme/hello_world/hello_world.o 00:01:40.146 CC examples/thread/thread/thread_ex.o 00:01:40.146 CC test/nvme/startup/startup.o 00:01:40.146 CC test/app/stub/stub.o 00:01:40.146 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:40.146 CC test/event/app_repeat/app_repeat.o 00:01:40.146 CC examples/vmd/led/led.o 00:01:40.146 CC test/nvme/fused_ordering/fused_ordering.o 00:01:40.146 CC examples/blob/hello_world/hello_blob.o 00:01:40.146 CC test/env/pci/pci_ut.o 00:01:40.146 CC examples/ioat/verify/verify.o 00:01:40.146 CC test/nvme/reset/reset.o 00:01:40.146 CC test/thread/poller_perf/poller_perf.o 00:01:40.146 CC test/nvme/reserve/reserve.o 00:01:40.146 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:40.146 CC test/event/event_perf/event_perf.o 00:01:40.146 CC examples/nvme/reconnect/reconnect.o 00:01:40.146 CC 
examples/blob/cli/blobcli.o 00:01:40.146 CC test/nvme/overhead/overhead.o 00:01:40.146 CC examples/accel/perf/accel_perf.o 00:01:40.146 CC test/env/vtophys/vtophys.o 00:01:40.146 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:40.146 CC test/blobfs/mkfs/mkfs.o 00:01:40.146 CC test/nvme/connect_stress/connect_stress.o 00:01:40.146 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.146 CC examples/idxd/perf/perf.o 00:01:40.146 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.146 CC test/nvme/compliance/nvme_compliance.o 00:01:40.146 CC test/event/scheduler/scheduler.o 00:01:40.146 CC test/nvme/aer/aer.o 00:01:40.146 CC test/env/memory/memory_ut.o 00:01:40.146 CC test/nvme/cuse/cuse.o 00:01:40.146 CC test/nvme/boot_partition/boot_partition.o 00:01:40.146 CC examples/sock/hello_world/hello_sock.o 00:01:40.146 CC test/nvme/sgl/sgl.o 00:01:40.147 CC examples/nvme/hotplug/hotplug.o 00:01:40.147 CC test/nvme/e2edp/nvme_dp.o 00:01:40.147 CC app/fio/bdev/fio_plugin.o 00:01:40.414 CC test/nvme/fdp/fdp.o 00:01:40.414 CC test/nvme/err_injection/err_injection.o 00:01:40.414 CC test/accel/dif/dif.o 00:01:40.414 CC test/nvme/simple_copy/simple_copy.o 00:01:40.414 CC test/dma/test_dma/test_dma.o 00:01:40.414 CC examples/nvmf/nvmf/nvmf.o 00:01:40.414 LINK spdk_nvme_discover 00:01:40.414 LINK iscsi_tgt 00:01:40.414 CC test/bdev/bdevio/bdevio.o 00:01:40.414 LINK rpc_client_test 00:01:40.414 LINK interrupt_tgt 00:01:40.414 LINK spdk_trace_record 00:01:40.414 LINK spdk_tgt 00:01:40.683 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:40.683 LINK jsoncat 00:01:40.683 LINK reactor 00:01:40.683 LINK nvmf_tgt 00:01:40.683 LINK zipf 00:01:40.683 LINK lsvmd 00:01:40.683 CC test/env/mem_callbacks/mem_callbacks.o 00:01:40.683 LINK env_dpdk_post_init 00:01:40.683 LINK vhost 00:01:40.683 CC test/lvol/esnap/esnap.o 00:01:40.944 LINK app_repeat 00:01:40.944 CXX test/cpp_headers/nvme_zns.o 00:01:40.944 CXX test/cpp_headers/nvmf_cmd.o 00:01:40.944 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:40.944 LINK spdk_dd 00:01:40.944 CXX test/cpp_headers/nvmf.o 00:01:40.944 CXX test/cpp_headers/nvmf_spec.o 00:01:40.944 CXX test/cpp_headers/nvmf_transport.o 00:01:40.944 LINK stub 00:01:40.944 CXX test/cpp_headers/opal.o 00:01:40.944 LINK hello_world 00:01:40.944 CXX test/cpp_headers/pci_ids.o 00:01:40.944 CXX test/cpp_headers/opal_spec.o 00:01:40.944 CXX test/cpp_headers/queue.o 00:01:40.944 CXX test/cpp_headers/pipe.o 00:01:40.944 LINK ioat_perf 00:01:40.944 CXX test/cpp_headers/reduce.o 00:01:40.944 CXX test/cpp_headers/rpc.o 00:01:40.944 CXX test/cpp_headers/scheduler.o 00:01:40.944 CXX test/cpp_headers/scsi.o 00:01:40.944 CXX test/cpp_headers/scsi_spec.o 00:01:40.944 CXX test/cpp_headers/sock.o 00:01:40.944 LINK hello_bdev 00:01:40.944 CXX test/cpp_headers/stdinc.o 00:01:40.944 CXX test/cpp_headers/string.o 00:01:40.944 CXX test/cpp_headers/thread.o 00:01:40.944 CXX test/cpp_headers/trace.o 00:01:40.944 LINK reserve 00:01:40.944 CXX test/cpp_headers/trace_parser.o 00:01:40.944 LINK doorbell_aers 00:01:40.944 LINK hello_blob 00:01:40.944 LINK reset 00:01:40.944 CXX test/cpp_headers/tree.o 00:01:40.944 CXX test/cpp_headers/ublk.o 00:01:40.944 LINK scheduler 00:01:40.944 CXX test/cpp_headers/util.o 00:01:40.944 LINK thread 00:01:40.944 LINK vtophys 00:01:40.944 LINK hotplug 00:01:40.944 CXX test/cpp_headers/uuid.o 00:01:40.944 CXX test/cpp_headers/version.o 00:01:40.944 CXX test/cpp_headers/vfio_user_pci.o 00:01:40.944 CXX test/cpp_headers/vhost.o 00:01:40.944 LINK hello_sock 00:01:40.944 CXX test/cpp_headers/vfio_user_spec.o 00:01:40.944 CXX 
test/cpp_headers/vmd.o 00:01:40.944 CXX test/cpp_headers/xor.o 00:01:40.944 LINK sgl 00:01:41.205 CXX test/cpp_headers/zipf.o 00:01:41.205 LINK cmb_copy 00:01:41.205 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:41.205 LINK overhead 00:01:41.205 LINK histogram_perf 00:01:41.205 LINK bdev_svc 00:01:41.205 LINK led 00:01:41.205 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:41.205 LINK poller_perf 00:01:41.205 LINK reactor_perf 00:01:41.205 LINK spdk_trace 00:01:41.205 LINK err_injection 00:01:41.205 LINK nvme_compliance 00:01:41.205 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:41.205 LINK startup 00:01:41.205 LINK verify 00:01:41.205 LINK pmr_persistence 00:01:41.205 LINK event_perf 00:01:41.205 LINK abort 00:01:41.205 LINK mkfs 00:01:41.463 LINK pci_ut 00:01:41.463 LINK connect_stress 00:01:41.463 LINK fused_ordering 00:01:41.463 LINK boot_partition 00:01:41.463 LINK nvme_manage 00:01:41.463 LINK spdk_bdev 00:01:41.463 LINK test_dma 00:01:41.463 LINK nvme_fuzz 00:01:41.464 LINK aer 00:01:41.464 LINK nvme_dp 00:01:41.464 LINK simple_copy 00:01:41.464 LINK arbitration 00:01:41.721 LINK accel_perf 00:01:41.721 LINK reconnect 00:01:41.721 LINK idxd_perf 00:01:41.721 LINK nvmf 00:01:41.721 LINK fdp 00:01:41.721 LINK mem_callbacks 00:01:41.721 LINK bdevio 00:01:41.721 LINK dif 00:01:41.721 LINK blobcli 00:01:41.721 LINK spdk_top 00:01:41.721 LINK vhost_fuzz 00:01:41.979 LINK spdk_nvme 00:01:41.979 LINK cuse 00:01:41.979 LINK spdk_nvme_perf 00:01:41.979 LINK spdk_nvme_identify 00:01:41.979 LINK bdevperf 00:01:41.979 LINK memory_ut 00:01:42.926 LINK iscsi_fuzz 00:01:44.820 LINK esnap 00:01:44.820 00:01:44.820 real 0m38.258s 00:01:44.820 user 6m2.303s 00:01:44.820 sys 5m28.159s 00:01:44.820 10:21:00 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:44.820 10:21:00 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.820 ************************************ 00:01:44.820 END TEST make 00:01:44.820 ************************************ 00:01:44.820 10:21:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.820 10:21:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.820 10:21:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:44.820 10:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.820 10:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.820 10:21:00 -- pm/common@44 -- $ pid=2353018 00:01:44.820 10:21:00 -- pm/common@50 -- $ kill -TERM 2353018 00:01:44.820 10:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.820 10:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.820 10:21:00 -- pm/common@44 -- $ pid=2353020 00:01:44.820 10:21:00 -- pm/common@50 -- $ kill -TERM 2353020 00:01:44.820 10:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.820 10:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.820 10:21:00 -- pm/common@44 -- $ pid=2353021 00:01:44.820 10:21:00 -- pm/common@50 -- $ kill -TERM 2353021 00:01:44.820 10:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.820 10:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.820 10:21:00 -- pm/common@44 -- $ pid=2353055 00:01:44.820 10:21:00 -- pm/common@50 -- $ sudo -E kill -TERM 2353055 
00:01:44.820 10:21:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.820 10:21:00 -- nvmf/common.sh@7 -- # uname -s 00:01:44.820 10:21:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.820 10:21:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.820 10:21:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.820 10:21:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.820 10:21:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.820 10:21:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.820 10:21:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.820 10:21:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.820 10:21:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.820 10:21:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.820 10:21:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:44.820 10:21:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:44.820 10:21:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.820 10:21:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.820 10:21:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:01:44.820 10:21:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.820 10:21:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:44.820 10:21:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.820 10:21:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.820 10:21:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.820 10:21:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.820 10:21:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.820 10:21:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.820 10:21:00 -- paths/export.sh@5 -- # export PATH 00:01:44.821 10:21:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.821 10:21:00 -- nvmf/common.sh@47 -- # : 0 00:01:44.821 10:21:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.821 10:21:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.821 10:21:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.821 10:21:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.821 10:21:00 -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.821 10:21:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.821 10:21:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.821 10:21:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.821 10:21:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.821 10:21:00 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.821 10:21:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.821 10:21:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.821 10:21:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:44.821 10:21:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.821 10:21:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:44.821 10:21:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.821 10:21:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.821 10:21:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.821 10:21:00 -- spdk/autotest.sh@48 -- # udevadm_pid=2411875 00:01:44.821 10:21:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.821 10:21:00 -- pm/common@17 -- # local monitor 00:01:44.821 10:21:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.821 10:21:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.821 10:21:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.821 10:21:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.821 10:21:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.821 10:21:00 -- pm/common@25 -- # sleep 1 00:01:44.821 10:21:00 -- pm/common@21 -- # date +%s 00:01:44.821 10:21:00 -- pm/common@21 -- # date +%s 00:01:44.821 10:21:00 -- pm/common@21 -- # date +%s 00:01:44.821 10:21:00 -- pm/common@21 -- # date +%s 00:01:44.821 10:21:00 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715761260 00:01:44.821 10:21:00 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715761260 00:01:44.821 10:21:00 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715761260 00:01:44.821 10:21:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715761260 00:01:44.821 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715761260_collect-vmstat.pm.log 00:01:44.821 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715761260_collect-cpu-load.pm.log 00:01:44.821 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715761260_collect-cpu-temp.pm.log 00:01:44.821 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715761260_collect-bmc-pm.bmc.pm.log 00:01:46.194 10:21:01 -- spdk/autotest.sh@55 -- # 
trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:46.194 10:21:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:46.194 10:21:01 -- common/autotest_common.sh@721 -- # xtrace_disable 00:01:46.194 10:21:01 -- common/autotest_common.sh@10 -- # set +x 00:01:46.194 10:21:01 -- spdk/autotest.sh@59 -- # create_test_list 00:01:46.194 10:21:01 -- common/autotest_common.sh@745 -- # xtrace_disable 00:01:46.194 10:21:01 -- common/autotest_common.sh@10 -- # set +x 00:01:46.194 10:21:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:01:46.194 10:21:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:46.194 10:21:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:46.194 10:21:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:46.194 10:21:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:46.194 10:21:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:46.194 10:21:01 -- common/autotest_common.sh@1452 -- # uname 00:01:46.194 10:21:01 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:01:46.194 10:21:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:46.195 10:21:01 -- common/autotest_common.sh@1472 -- # uname 00:01:46.195 10:21:01 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:01:46.195 10:21:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:46.195 10:21:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:46.195 10:21:01 -- spdk/autotest.sh@72 -- # hash lcov 00:01:46.195 10:21:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:46.195 10:21:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:46.195 --rc lcov_branch_coverage=1 00:01:46.195 --rc lcov_function_coverage=1 00:01:46.195 --rc genhtml_branch_coverage=1 00:01:46.195 --rc genhtml_function_coverage=1 00:01:46.195 --rc genhtml_legend=1 00:01:46.195 --rc geninfo_all_blocks=1 00:01:46.195 ' 00:01:46.195 10:21:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:46.195 --rc lcov_branch_coverage=1 00:01:46.195 --rc lcov_function_coverage=1 00:01:46.195 --rc genhtml_branch_coverage=1 00:01:46.195 --rc genhtml_function_coverage=1 00:01:46.195 --rc genhtml_legend=1 00:01:46.195 --rc geninfo_all_blocks=1 00:01:46.195 ' 00:01:46.195 10:21:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:46.195 --rc lcov_branch_coverage=1 00:01:46.195 --rc lcov_function_coverage=1 00:01:46.195 --rc genhtml_branch_coverage=1 00:01:46.195 --rc genhtml_function_coverage=1 00:01:46.195 --rc genhtml_legend=1 00:01:46.195 --rc geninfo_all_blocks=1 00:01:46.195 --no-external' 00:01:46.195 10:21:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:46.195 --rc lcov_branch_coverage=1 00:01:46.195 --rc lcov_function_coverage=1 00:01:46.195 --rc genhtml_branch_coverage=1 00:01:46.195 --rc genhtml_function_coverage=1 00:01:46.195 --rc genhtml_legend=1 00:01:46.195 --rc geninfo_all_blocks=1 00:01:46.195 --no-external' 00:01:46.195 10:21:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:46.195 lcov: LCOV version 1.14 00:01:46.195 10:21:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:01:52.753 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:52.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:52.753 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:52.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:52.753 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:52.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:52.753 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:52.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no 
functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:00.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:00.868 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:00.869 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 
00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:00.869 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:00.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:00.870 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:00.870 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:00.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:02.781 10:21:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:02.781 10:21:18 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:02.781 10:21:18 -- common/autotest_common.sh@10 -- # set +x 00:02:02.781 10:21:18 -- spdk/autotest.sh@91 -- # rm -f 00:02:02.781 10:21:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:05.323 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:02:05.323 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:02:05.323 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:05.323 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:02:05.584 10:21:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:05.584 10:21:21 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:05.584 10:21:21 -- common/autotest_common.sh@1666 -- # local 
-gA zoned_devs 00:02:05.584 10:21:21 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:05.584 10:21:21 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:05.584 10:21:21 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:05.584 10:21:21 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:05.584 10:21:21 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:05.584 10:21:21 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:05.584 10:21:21 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:05.584 10:21:21 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:02:05.584 10:21:21 -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:02:05.584 10:21:21 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:05.584 10:21:21 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:05.584 10:21:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:05.584 10:21:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:05.584 10:21:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:05.584 10:21:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:05.584 10:21:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:05.584 10:21:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:05.584 No valid GPT data, bailing 00:02:05.584 10:21:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:05.584 10:21:21 -- scripts/common.sh@391 -- # pt= 00:02:05.584 10:21:21 -- scripts/common.sh@392 -- # return 1 00:02:05.584 10:21:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:05.584 1+0 records in 00:02:05.584 1+0 records out 00:02:05.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0031265 s, 335 MB/s 00:02:05.584 10:21:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:05.584 10:21:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:05.584 10:21:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:05.584 10:21:21 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:05.584 10:21:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:05.584 No valid GPT data, bailing 00:02:05.584 10:21:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:05.584 10:21:21 -- scripts/common.sh@391 -- # pt= 00:02:05.584 10:21:21 -- scripts/common.sh@392 -- # return 1 00:02:05.584 10:21:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:05.584 1+0 records in 00:02:05.584 1+0 records out 00:02:05.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00260365 s, 403 MB/s 00:02:05.584 10:21:21 -- spdk/autotest.sh@118 -- # sync 00:02:05.584 10:21:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:05.584 10:21:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:05.584 10:21:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:10.943 10:21:26 -- spdk/autotest.sh@124 -- # uname -s 00:02:10.943 10:21:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:10.943 10:21:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:10.943 10:21:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:10.943 10:21:26 -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:02:10.943 10:21:26 -- common/autotest_common.sh@10 -- # set +x 00:02:10.943 ************************************ 00:02:10.943 START TEST setup.sh 00:02:10.943 ************************************ 00:02:10.943 10:21:26 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:10.943 * Looking for test storage... 00:02:10.943 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:10.943 10:21:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:10.943 10:21:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:10.943 10:21:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:10.943 10:21:26 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:10.943 10:21:26 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:10.943 10:21:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:10.943 ************************************ 00:02:10.943 START TEST acl 00:02:10.943 ************************************ 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:10.943 * Looking for test storage... 00:02:10.943 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:10.943 10:21:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:10.943 10:21:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:10.943 10:21:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:10.943 10:21:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:14.241 10:21:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:14.241 10:21:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 
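
The get_zoned_devs trace above walks /sys/block/nvme*/queue/zoned so that zoned namespaces are excluded before the GPT check and dd wipe. A minimal standalone sketch of that check (the helper behaviour is read off the trace; packaging it as its own script is an assumption):

#!/usr/bin/env bash
# Sketch of the zoned-device scan performed by get_zoned_devs in the trace above.
# A namespace counts as zoned when /sys/block/<dev>/queue/zoned reports anything
# other than "none"; such devices must not be wiped with dd.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue   # attribute may be absent on older kernels
    dev=${nvme##*/}
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned devices found: ${#zoned_devs[@]} (${!zoned_devs[*]})"

On this node both nvme0n1 and nvme1n1 report "none", which is why the trace falls through to spdk-gpt.py and the 1 MiB dd wipe.
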
00:02:14.241 10:21:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:14.241 10:21:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:14.241 10:21:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:14.241 10:21:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:17.547 Hugepages 00:02:17.547 node hugesize free / total 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 00:02:17.547 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:03:00.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 
== *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:17.547 10:21:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:17.547 10:21:32 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:17.547 10:21:32 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:17.547 10:21:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:17.547 ************************************ 00:02:17.547 START TEST denied 00:02:17.547 ************************************ 00:02:17.547 10:21:32 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:17.547 10:21:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:03:00.0' 00:02:17.547 10:21:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:17.547 10:21:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:03:00.0' 00:02:17.547 10:21:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:17.547 10:21:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:21.754 0000:03:00.0 (1344 51c3): Skipping denied controller at 0000:03:00.0 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:03:00.0 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:03:00.0 ]] 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:03:00.0/driver 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:21.754 10:21:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.958 00:02:25.958 real 0m8.250s 00:02:25.958 user 0m2.010s 00:02:25.958 sys 0m4.113s 00:02:25.958 10:21:41 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:25.958 10:21:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:25.958 ************************************ 00:02:25.958 END TEST denied 00:02:25.958 ************************************ 00:02:25.958 10:21:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:25.958 10:21:41 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:25.958 10:21:41 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:25.958 10:21:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:25.958 
************************************ 00:02:25.958 START TEST allowed 00:02:25.958 ************************************ 00:02:25.958 10:21:41 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:25.958 10:21:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:03:00.0 00:02:25.958 10:21:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:25.958 10:21:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.958 10:21:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:03:00.0 .*: nvme -> .*' 00:02:25.958 10:21:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:29.257 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:c9:00.0 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:29.257 10:21:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:29.258 10:21:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.556 00:02:32.556 real 0m7.038s 00:02:32.556 user 0m2.004s 00:02:32.556 sys 0m3.912s 00:02:32.557 10:21:48 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:32.557 10:21:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:32.557 ************************************ 00:02:32.557 END TEST allowed 00:02:32.557 ************************************ 00:02:32.557 00:02:32.557 real 0m21.846s 00:02:32.557 user 0m6.183s 00:02:32.557 sys 0m12.276s 00:02:32.557 10:21:48 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:32.557 10:21:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.557 ************************************ 00:02:32.557 END TEST acl 00:02:32.557 ************************************ 00:02:32.557 10:21:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:32.557 10:21:48 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:32.557 10:21:48 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:32.557 10:21:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:32.557 ************************************ 00:02:32.557 START TEST hugepages 00:02:32.557 ************************************ 00:02:32.557 10:21:48 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:32.819 * Looking for test storage... 
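
The denied/allowed tests above drive scripts/setup.sh purely through its PCI_BLOCKED and PCI_ALLOWED environment variables; both values and the grep patterns appear verbatim in the trace. A rough sketch of that pattern, assuming the workspace path from this run and a root shell:

#!/usr/bin/env bash
# Sketch of the acl denied/allowed pattern from the trace above (requires root;
# BDF 0000:03:00.0 and the workspace path are taken from this run).
SETUP=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh

# denied: a blocked controller must be skipped by "setup.sh config"
PCI_BLOCKED=' 0000:03:00.0' "$SETUP" config | \
    grep 'Skipping denied controller at 0000:03:00.0'

# allowed: only the allowed controller may be rebound (nvme -> vfio-pci above)
PCI_ALLOWED='0000:03:00.0' "$SETUP" config | \
    grep -E '0000:03:00.0 .*: nvme -> .*'

# hand the devices back to their kernel drivers afterwards
"$SETUP" reset
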
00:02:32.819 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.819 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 107729144 kB' 'MemAvailable: 112018244 kB' 'Buffers: 2696 kB' 'Cached: 10611664 kB' 'SwapCached: 0 kB' 'Active: 6724052 kB' 'Inactive: 4395652 kB' 'Active(anon): 6154240 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515076 kB' 'Mapped: 167540 kB' 'Shmem: 5648896 kB' 'KReclaimable: 296536 kB' 'Slab: 927036 kB' 'SReclaimable: 296536 kB' 'SUnreclaim: 630500 kB' 'KernelStack: 24784 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69510436 kB' 'Committed_AS: 7677224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228560 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.820 10:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.820 
10:21:48 setup.sh.hugepages -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: get_meminfo read the remaining /proc/meminfo keys (Shmem through HugePages_Surp), compared each against Hugepagesize and continued past every one] 00:02:32.821 10:21:48 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:32.821 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:32.822 10:21:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:32.822 10:21:48 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:32.822 10:21:48 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:32.822 10:21:48 
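
The trace above shows setup/common.sh resolving the default hugepage size from /proc/meminfo (Hugepagesize: 2048 kB on this host), after which hugepages.sh enumerates both NUMA nodes and writes 0 into every per-node hugepage pool before the tests run. A minimal sketch of that clearing step, assuming the standard Linux sysfs layout seen in the trace (variable names here are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Clear every per-node hugepage pool, mirroring the clear_hp loop traced above.
    # Writing to nr_hugepages requires root.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release all reserved pages of this size on this node
        done
    done
    export CLEAR_HUGE=yes                 # flag the trace shows being exported for scripts/setup.sh
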
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:32.822 ************************************ 00:02:32.822 START TEST default_setup 00:02:32.822 ************************************ 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.822 10:21:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:36.122 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.122 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.122 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 
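
In the default_setup trace above, get_test_nr_hugepages sizes the test pool by dividing the requested amount by the default hugepage size: 2097152 kB / 2048 kB per page = 1024 pages, and because a single user node ('0') was passed in, all 1024 pages are assigned to node 0. A short sketch of that arithmetic, assuming both figures are in kB to match the Hugepagesize unit (names are illustrative):

    # Sizing math consistent with the numbers in the trace above.
    size_kb=2097152                              # requested hugepage memory (2 GiB)
    hugepage_kb=2048                             # default hugepage size from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))    # -> 1024 pages
    user_nodes=(0)                               # node ids requested by the test
    declare -A nodes_test
    for n in "${user_nodes[@]}"; do
        nodes_test[$n]=$nr_hugepages             # node0 receives all 1024 pages
    done
    echo "node0: ${nodes_test[0]} x ${hugepage_kb} kB hugepages"
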
00:02:36.383 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:02:36.642 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109966696 kB' 'MemAvailable: 114255364 kB' 'Buffers: 2696 kB' 'Cached: 10611920 kB' 'SwapCached: 0 kB' 'Active: 6751628 kB' 'Inactive: 4395652 kB' 'Active(anon): 6181816 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542068 kB' 'Mapped: 167920 kB' 'Shmem: 5649152 kB' 'KReclaimable: 295672 kB' 'Slab: 919576 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623904 kB' 'KernelStack: 24784 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228496 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:36.909 
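
verify_nr_hugepages, which begins in the trace above, extracts single fields (AnonHugePages here, HugePages_Surp and HugePages_Rsvd further down) from the /proc/meminfo snapshot by reading it line by line with IFS=': ' and printing the value of the first matching key. A self-contained sketch of that get_meminfo pattern, keeping only the system-wide path (the traced helper can also read /sys/devices/system/node/node$N/meminfo for a per-node view):

    #!/usr/bin/env bash
    # Scan /proc/meminfo for one key and print its value (kB for sizes, a page
    # count for the HugePages_* fields), mirroring the read loop traced above.
    get_meminfo() {
        local get="$1" var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize      # e.g. 2048
    get_meminfo AnonHugePages     # e.g. 0  -> anon
    get_meminfo HugePages_Surp    # e.g. 0  -> surp
    get_meminfo HugePages_Rsvd    # e.g. 0  -> resv
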
10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: get_meminfo read MemTotal through Committed_AS from the snapshot above, compared each against AnonHugePages and continued past every one] 00:02:36.910 10:21:52
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109970108 kB' 'MemAvailable: 114258776 kB' 'Buffers: 2696 kB' 'Cached: 10611924 kB' 'SwapCached: 0 kB' 'Active: 6751988 kB' 'Inactive: 4395652 kB' 'Active(anon): 6182176 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542344 kB' 'Mapped: 167928 kB' 'Shmem: 5649156 kB' 'KReclaimable: 295672 kB' 'Slab: 919568 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623896 kB' 'KernelStack: 24784 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:02:36.910 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: get_meminfo read SwapCached through CmaFree, compared each against HugePages_Surp and continued past every one] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.912 10:21:52
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109969332 kB' 'MemAvailable: 114258000 kB' 'Buffers: 2696 kB' 'Cached: 10611940 kB' 'SwapCached: 0 kB' 'Active: 6751508 kB' 'Inactive: 4395652 kB' 'Active(anon): 6181696 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 
9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541304 kB' 'Mapped: 167928 kB' 'Shmem: 5649172 kB' 'KReclaimable: 295672 kB' 'Slab: 919568 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623896 kB' 'KernelStack: 24688 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.912 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.913 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.913 10:21:52 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:36.914 nr_hugepages=1024 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:36.914 resv_hugepages=0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:36.914 surplus_hugepages=0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:36.914 anon_hugepages=0 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109969524 kB' 'MemAvailable: 114258192 kB' 'Buffers: 2696 kB' 'Cached: 10611944 kB' 'SwapCached: 0 kB' 'Active: 6750692 kB' 'Inactive: 4395652 kB' 'Active(anon): 6180880 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540956 kB' 'Mapped: 167920 kB' 'Shmem: 5649176 kB' 'KReclaimable: 295672 kB' 'Slab: 919720 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 624048 kB' 'KernelStack: 24720 kB' 'PageTables: 9264 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228464 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.914 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
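(For readers following the xtrace: the long runs of `[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]` / `continue` entries around this point are setup/common.sh scanning every field of a meminfo dump for one requested key, then echoing its value. The following is a minimal stand-alone sketch of that scan, not the SPDK helper itself; `get_field` is an illustrative name, and it reads the file directly instead of loading it into an array with `mapfile` the way the traced code does.)

```bash
#!/usr/bin/env bash
# Split each meminfo line on ': ', print the value of the one field asked for,
# and fall back to 0 when the field is absent -- the pattern the trace shows.
get_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    echo 0
}

get_field HugePages_Total    # prints 1024 on the runner captured in this log
get_field HugePages_Surp     # prints 0
```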
00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.915 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60206588 kB' 'MemUsed: 5549392 kB' 'SwapCached: 0 kB' 'Active: 1748536 kB' 'Inactive: 226004 kB' 'Active(anon): 1659836 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1671752 kB' 'Mapped: 53376 kB' 'AnonPages: 311964 kB' 'Shmem: 1357048 kB' 'KernelStack: 11848 kB' 'PageTables: 5800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 464192 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
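(A hedged sketch of two details visible around this point in the trace: when a NUMA node id is supplied, the per-node meminfo file under /sys/devices/system/node is used instead of /proc/meminfo, and get_nodes counts the node directories before distributing huge pages. The helper names below are illustrative; the real helper additionally strips the "Node <id> " prefix that per-node meminfo files put in front of every field, which the trace shows as the `${mem[@]#Node +([0-9]) }` expansion.)

```bash
#!/usr/bin/env bash
# Pick the meminfo source for an optional NUMA node id.
meminfo_path() {
    local node=$1
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        echo "/sys/devices/system/node/node$node/meminfo"
    else
        echo /proc/meminfo
    fi
}

# Count node directories, mirroring the walk over /sys/devices/system/node/node*.
count_nodes() {
    local n=0 d
    for d in /sys/devices/system/node/node[0-9]*; do
        [[ -d $d ]] && n=$((n + 1))
    done
    echo "$n"
}

meminfo_path 0   # -> /sys/devices/system/node/node0/meminfo on this runner
count_nodes      # -> 2 (the trace records no_nodes=2)
```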
00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.916 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
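(The arithmetic the default_setup test is asserting here can be restated compactly; this sketch only replays the numbers already printed in this log and is not the test script itself: HugePages_Total must equal the requested page count plus surplus plus reserved pages, and node 0 is expected to hold all 1024 default-size 2048 kB pages.)

```bash
#!/usr/bin/env bash
nr_hugepages=1024   # requested number of huge pages (nr_hugepages=1024 in the trace)
surp=0              # HugePages_Surp  (system-wide /proc/meminfo)
resv=0              # HugePages_Rsvd  (system-wide /proc/meminfo)
total=1024          # HugePages_Total (system-wide /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "system-wide hugepage accounting OK"
fi

node0_total=1024    # HugePages_Total from /sys/devices/system/node/node0/meminfo
[[ $node0_total -eq 1024 ]] && echo "node0=1024 expecting 1024"
```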
00:02:36.917 [... setup/common.sh@31-32 xtrace continues field by field over the remaining /proc/meminfo keys, each skipped with 'continue', until HugePages_Surp matches ...]
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:36.917 node0=1024 expecting 1024
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:36.917 
00:02:36.917 real 0m4.228s
00:02:36.917 user 0m1.013s
00:02:36.917 sys 0m1.952s
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable
00:02:36.917 10:21:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:36.917 ************************************
00:02:36.917 END TEST default_setup
00:02:36.917 ************************************
00:02:37.205 10:21:52 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:37.205 10:21:52 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:02:37.205 10:21:52 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:02:37.205 10:21:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:37.205 ************************************
00:02:37.205 START TEST per_node_1G_alloc
00:02:37.205 ************************************
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:37.205 10:21:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:39.766 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:02:39.766 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:39.766 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:39.766 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
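The trace above shows get_test_nr_hugepages turning a 1048576 kB request into 512 default-size (2048 kB) pages on each of nodes 0 and 1, which setup.sh is then asked to apply via NRHUGE=512 HUGENODE=0,1. A minimal sketch of how such a per-node request maps onto the kernel's standard hugetlb sysfs knobs follows; this is an illustration only, not SPDK's scripts/setup.sh, and it must run as root:

```bash
#!/usr/bin/env bash
# Illustration only (not SPDK's scripts/setup.sh): apply a per-node hugepage
# request such as NRHUGE=512 HUGENODE=0,1 using the kernel's sysfs interface.
set -euo pipefail

NRHUGE=${NRHUGE:-512}       # pages requested per node
HUGENODE=${HUGENODE:-0,1}   # comma-separated NUMA node ids

# Default hugepage size from /proc/meminfo (2048 on this machine).
hugepgsz_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

IFS=',' read -r -a nodes <<< "$HUGENODE"
for node in "${nodes[@]}"; do
    sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepgsz_kb}kB/nr_hugepages
    echo "$NRHUGE" > "$sysfs"   # ask the kernel to grow/shrink this node's pool
    echo "node${node}: requested $NRHUGE, kernel now reports $(cat "$sysfs")"
done
```

Writing the per-node nr_hugepages file resizes the pool on that node only; the verification pass that follows reads the resulting counters back out of /proc/meminfo.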
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.766 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109967776 kB' 'MemAvailable: 114256444 kB' 'Buffers: 2696 kB' 'Cached: 10612072 kB' 'SwapCached: 0 kB' 'Active: 6752136 kB' 'Inactive: 4395652 kB' 'Active(anon): 6182324 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541840 kB' 'Mapped: 168032 kB' 'Shmem: 5649304 kB' 'KReclaimable: 295672 kB' 'Slab: 919604 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623932 kB' 'KernelStack: 24608 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228352 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB'
00:02:39.766 [... setup/common.sh@31-32 xtrace walks the snapshot key by key, skipping each with 'continue', until AnonHugePages matches ...]
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
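The block above is the xtrace of setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time with `IFS=': ' read -r var val _` until the requested key matches. A condensed sketch of the same idea, not the SPDK helper verbatim (the function name below is made up for the sketch):

```bash
#!/usr/bin/env bash
# Sketch of the key lookup traced above: scan /proc/meminfo field by field and
# print the value of one key. Illustration only, not SPDK's setup/common.sh.
get_meminfo_value() {
    local get=$1 var val _
    # "AnonHugePages:         0 kB" splits into var=AnonHugePages, val=0, _=kB
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_value AnonHugePages)    # 0 in the run above
surp=$(get_meminfo_value HugePages_Surp)   # 0 in the run above
echo "AnonHugePages=${anon} HugePages_Surp=${surp}"
```

A one-liner such as `awk '/^AnonHugePages:/ {print $2}' /proc/meminfo` would return the same value without the per-field loop that dominates the trace.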
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.767 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.768 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109968844 kB' 'MemAvailable: 114257512 kB' 'Buffers: 2696 kB' 'Cached: 10612076 kB' 'SwapCached: 0 kB' 'Active: 6752232 kB' 'Inactive: 4395652 kB' 'Active(anon): 6182420 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542012 kB' 'Mapped: 168032 kB' 'Shmem: 5649308 kB' 'KReclaimable: 295672 kB' 'Slab: 919596 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623924 kB' 'KernelStack: 24480 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7738080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228368 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB'
00:02:39.768 [... setup/common.sh@31-32 xtrace walks the snapshot key by key, skipping each with 'continue', until HugePages_Surp matches ...]
00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
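In these lookups node= is empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the helper falls back to /proc/meminfo. When a node id is supplied, the per-node file is read instead and its "Node <N> " prefix is stripped (the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace). A rough sketch of that per-node path, assuming the standard sysfs layout; the helper name below is made up for the sketch:

```bash
#!/usr/bin/env bash
# Sketch of the per-node branch visible in the trace: read
# /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> "
# prefix, strip it, then do the same key/value scan.
# Illustration only, not SPDK's setup/common.sh verbatim.
get_node_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    # e.g. "Node 0 HugePages_Total:   512" -> "HugePages_Total:   512"
    sed "s/^Node ${node} //" "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done
}

for n in 0 1; do
    echo "node${n}: HugePages_Total=$(get_node_meminfo HugePages_Total "$n") HugePages_Free=$(get_node_meminfo HugePages_Free "$n")"
done
```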
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.769 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109968280 kB' 'MemAvailable: 114256948 kB' 'Buffers: 2696 kB' 'Cached: 10612092 kB' 'SwapCached: 0 kB' 'Active: 6752452 kB' 'Inactive: 4395652 kB' 'Active(anon): 6182640 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542600 kB' 'Mapped: 167936 kB' 'Shmem: 5649324 kB' 'KReclaimable: 295672 kB' 'Slab: 919556 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623884 kB' 'KernelStack: 24720 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7739588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.770 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.770 10:21:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the scan loop walks the remaining /proc/meminfo keys, Cached through FilePmdMapped, emitting the same IFS=': ' / read -r var val _ / continue trace for each key that is not HugePages_Rsvd ...]
00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:39.771 nr_hugepages=1024 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:39.771 resv_hugepages=0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:39.771 surplus_hugepages=0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:39.771 anon_hugepages=0 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:39.771 
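The get_meminfo calls traced above show the lookup pattern: pick /proc/meminfo (or a per-node meminfo file when a node id is given), drop the "Node <N> " prefix, then scan "key: value" pairs with IFS=': ' read until the requested key matches. Below is a minimal, self-contained bash sketch of that pattern; the function name and the sed-based prefix strip are illustrative simplifications, not the verbatim SPDK helper in setup/common.sh.

  #!/usr/bin/env bash
  # Minimal sketch of the lookup pattern traced above: read /proc/meminfo (or a
  # per-node meminfo file), strip the "Node <N> " prefix, and scan "key: value"
  # pairs with IFS=': ' until the requested key is found. Names are illustrative.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _rest
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _rest; do
          if [[ $var == "$get" ]]; then
              echo "$val"          # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }

  # Usage mirroring the calls in this log:
  get_meminfo_sketch HugePages_Rsvd       # system-wide reserved hugepages
  get_meminfo_sketch HugePages_Total      # system-wide pool size
  get_meminfo_sketch HugePages_Surp 0     # surplus hugepages on NUMA node 0

On the node in this log those lookups return 0, 1024 and 0 respectively, which is what the hugepages.sh accounting check (( 1024 == nr_hugepages + surp + resv )) relies on. The trace resumes below with the same helper being called for HugePages_Total.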
10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:39.771 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109968032 kB' 'MemAvailable: 114256700 kB' 'Buffers: 2696 kB' 'Cached: 10612096 kB' 'SwapCached: 0 kB' 'Active: 6752776 kB' 'Inactive: 4395652 kB' 'Active(anon): 6182964 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543052 kB' 'Mapped: 167936 kB' 'Shmem: 5649328 kB' 'KReclaimable: 295672 kB' 'Slab: 919556 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623884 kB' 'KernelStack: 24800 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7740376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228512 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.772 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.772 10:21:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[... the scan loop walks the remaining /proc/meminfo keys through ShmemPmdMapped, rejecting each one against HugePages_Total with the same continue / IFS=': ' / read -r var val _ trace ...]
00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- #
[[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.773 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61255132 kB' 'MemUsed: 4500848 kB' 'SwapCached: 0 kB' 'Active: 1749696 kB' 'Inactive: 226004 kB' 'Active(anon): 1660996 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1671896 kB' 'Mapped: 53392 kB' 'AnonPages: 312900 kB' 'Shmem: 1357192 kB' 'KernelStack: 11896 kB' 'PageTables: 5908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 464532 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:39.774 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the scan loop walks the node0 meminfo keys from SwapCached through HugePages_Free, rejecting each one against HugePages_Surp with the same continue / IFS=': ' / read -r var val _ trace ...]
00:02:39.775 10:21:55
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 48713788 kB' 'MemUsed: 11968204 kB' 'SwapCached: 0 kB' 'Active: 5003396 kB' 'Inactive: 4169648 kB' 'Active(anon): 4522284 kB' 'Inactive(anon): 0 kB' 'Active(file): 481112 kB' 'Inactive(file): 4169648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8942916 kB' 'Mapped: 114544 kB' 'AnonPages: 230212 kB' 'Shmem: 4292156 kB' 'KernelStack: 12856 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156836 kB' 'Slab: 455024 kB' 'SReclaimable: 156836 kB' 'SUnreclaim: 298188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.775 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.776 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:40.038 node0=512 expecting 512 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:40.038 node1=512 expecting 512 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:40.038 00:02:40.038 real 0m2.816s 00:02:40.038 user 0m0.903s 00:02:40.038 sys 0m1.678s 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:40.038 10:21:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:40.038 ************************************ 00:02:40.038 END TEST per_node_1G_alloc 00:02:40.038 ************************************ 00:02:40.038 10:21:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:40.038 10:21:55 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:40.038 10:21:55 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:40.038 10:21:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.038 ************************************ 00:02:40.038 START TEST even_2G_alloc 00:02:40.038 ************************************ 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@49 -- # local size=2097152 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.038 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.039 10:21:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:42.589 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:42.589 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 
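per_node_1G_alloc ends with both nodes reporting 512 pages against an expected 512, and even_2G_alloc then sizes its pool exactly as the trace shows: 2097152 kB requested, divided by the 2048 kB default hugepage size, gives nr_hugepages=1024, and with no user-supplied node list the per-node helper splits that evenly, 512 pages on each of the two nodes. A rough sketch of that arithmetic (illustrative variable names, not the verbatim hugepages.sh):

size_kb=2097152                                     # even_2G_alloc requests 2 GiB
hugepage_kb=2048                                    # Hugepagesize: 2048 kB per the meminfo dumps
nr_hugepages=$(( size_kb / hugepage_kb ))           # -> 1024
no_nodes=2                                          # node0 and node1 on this machine
nodes_test=()
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes )) # -> 512 pages per node
done
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"     # the knobs handed to scripts/setup.sh

scripts/setup.sh then runs with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes; the vfio-pci lines on either side of this point are setup.sh noting that each PCI device is still bound from the previous test.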
00:02:42.589 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:42.589 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:42.589 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109997912 kB' 'MemAvailable: 114286580 kB' 'Buffers: 2696 kB' 'Cached: 10612232 kB' 'SwapCached: 0 kB' 'Active: 6742968 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173156 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532508 kB' 'Mapped: 166700 kB' 'Shmem: 5649464 kB' 'KReclaimable: 295672 kB' 'Slab: 919420 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623748 kB' 'KernelStack: 24336 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7686308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228224 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.589 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.590 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.591 10:21:58 
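A few entries back, hugepages.sh@96 compares the current transparent_hugepage setting, which expands to 'always [madvise] never' and presumably comes from /sys/kernel/mm/transparent_hugepage/enabled, against the pattern *[never]*: only when THP is not disabled outright does verify_nr_hugepages go on to sample AnonHugePages, which here comes back as 0, hence the anon=0 assignment above. A hedged sketch of that guard (the sysfs path is inferred from the expanded value, not quoted from the script):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is available, so anonymous hugepages may exist; record their size
    anon=$(get_meminfo AnonHugePages)                    # 0 kB in this run
fi

The get_meminfo call is the same helper sketched earlier; with no node argument it falls back to /proc/meminfo, which is why this scan walks the full system-wide field list.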
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109999340 kB' 'MemAvailable: 114288008 kB' 'Buffers: 2696 kB' 'Cached: 10612236 kB' 'SwapCached: 0 kB' 'Active: 6744344 kB' 'Inactive: 4395652 kB' 'Active(anon): 6174532 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533916 kB' 'Mapped: 166700 kB' 'Shmem: 5649468 kB' 'KReclaimable: 295672 kB' 'Slab: 919368 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623696 kB' 'KernelStack: 24400 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228192 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.591 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.592 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[log trimmed: the identical xtrace iteration -- setup/common.sh@31 IFS=': ', @31 read -r var val _, @32 test of the next field name against HugePages_Surp, @32 continue -- repeats here for every remaining /proc/meminfo field from Mlocked through CmaTotal; none of them match, so the scan keeps going]
00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 
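[annotation] The dense xtrace above and below is one small helper at work: setup/common.sh's get_meminfo captures all of /proc/meminfo (or a NUMA node's meminfo file), strips any "Node <n>" prefix, then walks the snapshot field by field until the requested key is found and echoes its value; every non-matching field produces one of the IFS/read/continue lines seen in this log. A minimal stand-alone sketch of that logic, with illustrative names and no claim to match the SPDK sources line for line:

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the traced lookup: print the value of one meminfo field,
    # either system-wide or for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node's own meminfo file instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Node files prefix each line with "Node <n> "; strip it so the
        # same key names match in both cases.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # not the requested field yet
            echo "$val"
            return 0
        done
        return 1
    }

    # The checks traced in this log amount to:
    #   surp=$(get_meminfo_sketch HugePages_Surp)    # -> 0
    #   resv=$(get_meminfo_sketch HugePages_Rsvd)    # -> 0 (looked up next)
    #   total=$(get_meminfo_sketch HugePages_Total)  # -> 1024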
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109994804 kB' 'MemAvailable: 114283472 kB' 'Buffers: 2696 kB' 'Cached: 10612236 kB' 'SwapCached: 0 kB' 'Active: 6746108 kB' 'Inactive: 4395652 kB' 'Active(anon): 6176296 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535636 kB' 'Mapped: 167168 kB' 'Shmem: 5649468 kB' 'KReclaimable: 295672 kB' 'Slab: 919368 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623696 kB' 'KernelStack: 24368 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7691824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228176 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.593 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.594 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.594 
10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[log trimmed: the same IFS=': ' / read -r var val _ / field-name test / continue iteration repeats for every remaining /proc/meminfo field, this time compared against HugePages_Rsvd; nothing matches until the HugePages_* entries near the end of the file]
00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:42.596 nr_hugepages=1024 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.596 resv_hugepages=0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.596 surplus_hugepages=0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.596 anon_hugepages=0 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109990900 kB' 'MemAvailable: 114279568 kB' 'Buffers: 2696 kB' 'Cached: 10612236 kB' 'SwapCached: 0 kB' 'Active: 6748268 kB' 'Inactive: 4395652 kB' 'Active(anon): 6178456 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538252 kB' 'Mapped: 167092 kB' 'Shmem: 5649468 kB' 'KReclaimable: 295672 kB' 'Slab: 919340 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623668 kB' 'KernelStack: 24368 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7694236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228196 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.596 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[log trimmed: the same scan iteration repeats for every remaining /proc/meminfo field, now compared against HugePages_Total]
00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # continue 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.597 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 
== nr_hugepages + surp + resv )) 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61272788 kB' 'MemUsed: 4483192 kB' 'SwapCached: 0 kB' 'Active: 1740776 kB' 'Inactive: 226004 kB' 'Active(anon): 1652076 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1672044 kB' 'Mapped: 52212 kB' 'AnonPages: 303852 kB' 'Shmem: 1357340 kB' 'KernelStack: 11656 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 463968 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
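[annotation] At this point the trace has confirmed 1024 hugepages in total (surp=0, resv=0) and get_nodes has found 2 NUMA nodes, each expected to hold 512 of the 2048 kB pages; the per-node HugePages_Total/HugePages_Surp reads that follow check that the split really is even. A self-contained sketch of that check, again with illustrative names rather than the hugepages.sh code verbatim ("Node <n> HugePages_Total: <count>" is the per-node meminfo format shown in the trace):

    #!/usr/bin/env bash
    # Verify that the globally requested 2 MB hugepages ended up split
    # evenly across the NUMA nodes (1024 total / 2 nodes = 512 each).
    expected_per_node=512

    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
        surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$node/meminfo")
        echo "node$id: HugePages_Total=$total HugePages_Surp=$surp"
        (( total == expected_per_node )) ||
            echo "node$id: uneven split ($total != $expected_per_node)" >&2
    done

On the machine in this run, node0 reports HugePages_Total: 512 and HugePages_Surp: 0 (see the node0 meminfo dump above); the trace below is the same field-by-field scan, this time over node0's meminfo.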
00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.598 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
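The backslash-laden right-hand sides in this scan (\H\u\g\e\P\a\g\e\s\_\S\u\r\p here, and \A\n\o\n\H\u\g\e\P\a\g\e\s later in the odd_alloc trace) are not corrupted output: when the right-hand side of == inside [[ ]] is quoted, bash xtrace prints it with every character escaped to mark it as a literal match rather than a glob pattern. A short reproduction (assumed, not taken from the SPDK scripts); the node0 scan itself resumes in the next entry:

#!/usr/bin/env bash
set -x
get=HugePages_Surp
var=Shmem
[[ $var == "$get" ]]    # traced as: [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x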
00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 
10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.599 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 48718160 kB' 'MemUsed: 11963832 kB' 'SwapCached: 0 kB' 'Active: 5002224 kB' 'Inactive: 4169648 kB' 'Active(anon): 4521112 kB' 'Inactive(anon): 0 kB' 'Active(file): 481112 kB' 'Inactive(file): 4169648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8942948 kB' 'Mapped: 114788 kB' 'AnonPages: 229112 kB' 'Shmem: 4292188 kB' 'KernelStack: 12696 kB' 'PageTables: 3120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156836 kB' 'Slab: 455372 kB' 'SReclaimable: 156836 kB' 'SUnreclaim: 298536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.600 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:42.601 node0=512 expecting 512 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:42.601 node1=512 expecting 512 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:42.601 00:02:42.601 real 0m2.687s 00:02:42.601 user 0m0.876s 00:02:42.601 sys 0m1.575s 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:42.601 10:21:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:42.601 ************************************ 00:02:42.601 END TEST even_2G_alloc 00:02:42.601 ************************************ 00:02:42.601 10:21:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:42.601 10:21:58 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:42.601 10:21:58 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:42.601 10:21:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.601 ************************************ 00:02:42.601 START TEST odd_alloc 00:02:42.601 ************************************ 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1025 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.601 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.602 10:21:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:45.145 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:45.145 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:45.145 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:45.145 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 
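Before setup.sh's device probe above finishes (its last vfio-pci entry follows immediately below), the odd_alloc prologue at setup/hugepages.sh@81-@84 has already split the 2098176 kB request (HUGEMEM=2049) into per-node counts: 1025 pages of 2048 kB over 2 nodes. The loop fills nodes from the highest index down, so node1 gets floor(1025/2) = 512 and the leftover page lands on node0 as 513. A hedged reconstruction of that loop (the helper name is hypothetical and the real script stores the result in nodes_test instead of printing it, but the arithmetic matches the traced values):

#!/usr/bin/env bash
split_pages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2 node
    local -a nodes_test
    while ((_no_nodes > 0)); do
        # highest-numbered node first: floor of whatever is still unassigned
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
        : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # traced as ": 513" then ": 0"
        : $((--_no_nodes))                                   # traced as ": 1" then ": 0"
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"
    done
}

split_pages_per_node 1025 2    # node0=513, node1=512

The 1025-page total also matches the system snapshot printed further down ('HugePages_Total: 1025', 'Hugetlb: 2099200 kB', i.e. 1025 x 2048 kB).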
00:02:45.145 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:45.413 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110001644 kB' 'MemAvailable: 114290312 kB' 'Buffers: 2696 kB' 'Cached: 10612380 kB' 'SwapCached: 0 kB' 'Active: 6741764 kB' 'Inactive: 4395652 kB' 'Active(anon): 6171952 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531548 kB' 'Mapped: 166704 kB' 'Shmem: 5649612 kB' 'KReclaimable: 295672 kB' 'Slab: 919584 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623912 kB' 'KernelStack: 24368 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7686620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228240 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 
10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 
10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.414 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 
10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110000888 kB' 'MemAvailable: 114289556 kB' 'Buffers: 2696 kB' 'Cached: 10612380 kB' 'SwapCached: 0 kB' 'Active: 6742356 kB' 'Inactive: 4395652 kB' 'Active(anon): 6172544 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 531636 kB' 'Mapped: 166684 kB' 'Shmem: 5649612 kB' 'KReclaimable: 295672 kB' 'Slab: 919584 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623912 kB' 'KernelStack: 24368 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7686644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228240 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.415 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.416 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 
10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 0 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110000880 kB' 'MemAvailable: 114289548 kB' 'Buffers: 2696 kB' 'Cached: 10612396 kB' 'SwapCached: 0 kB' 'Active: 6741248 kB' 'Inactive: 4395652 kB' 'Active(anon): 6171436 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530960 kB' 'Mapped: 166608 kB' 'Shmem: 5649628 kB' 'KReclaimable: 295672 kB' 'Slab: 919564 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623892 kB' 'KernelStack: 24320 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7686796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228240 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.417 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.418 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:45.419 nr_hugepages=1025 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:45.419 resv_hugepages=0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:45.419 surplus_hugepages=0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:45.419 anon_hugepages=0 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110001384 kB' 'MemAvailable: 114290052 kB' 'Buffers: 2696 kB' 'Cached: 10612396 kB' 'SwapCached: 0 kB' 'Active: 6742000 kB' 'Inactive: 4395652 kB' 'Active(anon): 6172188 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531712 kB' 'Mapped: 166608 kB' 'Shmem: 5649628 kB' 'KReclaimable: 295672 kB' 'Slab: 919564 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623892 kB' 'KernelStack: 24352 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7686816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228240 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.419 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.420 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- 
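
The trace above ends with get_meminfo returning 1025 for HugePages_Total and hugepages.sh asserting (( 1025 == nr_hugepages + surp + resv )). A minimal stand-alone sketch of that accounting check follows; it assumes surp and resv come from the HugePages_Surp and HugePages_Rsvd fields (they are read earlier in the script, outside this excerpt), and the helper name below is illustrative, not part of setup/hugepages.sh.

    # Minimal sketch (helper name is illustrative, not part of setup/hugepages.sh):
    # check that the kernel's HugePages_Total equals the requested page count plus
    # surplus and reserved pages, mirroring (( total == nr_hugepages + surp + resv )).
    hugepage_accounting_ok() {
        local requested=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        (( total == requested + surp + resv ))
    }
    # The odd_alloc case requests 1025 pages in total:
    hugepage_accounting_ok 1025 && echo "hugepage accounting consistent"
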
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61271952 kB' 'MemUsed: 4484028 kB' 'SwapCached: 0 kB' 'Active: 1738728 kB' 'Inactive: 226004 kB' 'Active(anon): 1650028 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1672140 kB' 'Mapped: 52080 kB' 'AnonPages: 301628 kB' 'Shmem: 1357436 kB' 'KernelStack: 11624 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 463996 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.421 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- 
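
Node 0's surplus count has just been obtained by walking /sys/devices/system/node/node0/meminfo field by field. Below is a simplified sketch of that lookup pattern, assuming the same file layout (per-node files prefix every line with "Node <N>"); it is an approximation of setup/common.sh:get_meminfo, not the SPDK implementation.

    # Simplified approximation of setup/common.sh:get_meminfo as seen in the trace:
    # pick the global or node-local meminfo file, strip the "Node <N>" prefix, and
    # return the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }            # per-node files prefix each line
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Surp 0           # prints node 0's surplus count
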
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 48729628 kB' 'MemUsed: 11952364 kB' 'SwapCached: 0 kB' 'Active: 5002852 kB' 'Inactive: 4169648 kB' 'Active(anon): 4521740 kB' 'Inactive(anon): 0 kB' 'Active(file): 481112 kB' 'Inactive(file): 4169648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8942992 kB' 'Mapped: 114528 kB' 'AnonPages: 229652 kB' 'Shmem: 4292232 kB' 'KernelStack: 12728 kB' 'PageTables: 3048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156836 kB' 'Slab: 455568 kB' 'SReclaimable: 156836 kB' 'SUnreclaim: 298732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.422 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.423 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:45.423 node0=512 expecting 513 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.424 10:22:01 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:45.424 node1=513 expecting 512 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:45.424 00:02:45.424 real 0m2.766s 00:02:45.424 user 0m0.890s 00:02:45.424 sys 0m1.635s 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:45.424 10:22:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:45.424 ************************************ 00:02:45.424 END TEST odd_alloc 00:02:45.424 ************************************ 00:02:45.424 10:22:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:45.424 10:22:01 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:45.424 10:22:01 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:45.424 10:22:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.424 ************************************ 00:02:45.424 START TEST custom_alloc 00:02:45.424 ************************************ 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # 
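
The odd_alloc check at setup/hugepages.sh@130 just above reduces to [[ 512 513 == 512 513 ]]: the per-node counts are used as array indices, so expanding the keys yields them in ascending order and the comparison is order-insensitive (the odd extra page may land on either node). A stand-alone sketch with the values echoed in the trace; the variable roles are labelled approximately.

    # Stand-alone sketch of the order-insensitive comparison; the concrete numbers
    # are the ones echoed above, the variable roles are labelled approximately.
    nodes_sys=([0]=512 [1]=513)    # per-node counts read back from sysfs
    nodes_test=([0]=513 [1]=512)   # per-node counts the test had assigned
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        # Using the count itself as the index makes the key list come out sorted.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Both key lists expand to "512 513" no matter which node got the extra page.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node totals match"
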
nodes_test[_no_nodes - 1]=256 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:45.424 10:22:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.424 10:22:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:47.965 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:47.965 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.965 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.965 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.966 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:48.235 
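
The custom_alloc prologue above converts the requested sizes into 2048 kB hugepages (1048576 -> 512 pages on node 0, 2097152 -> 1024 pages on node 1, consistent with sizes given in kB) and joins them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', 1536 pages in total. A minimal sketch of that arithmetic and string assembly; the variable names mirror the trace, but the code is not hugepages.sh itself.

    # Illustrative sketch of the page arithmetic and HUGENODE assembly; variable
    # names mirror the trace, the code is not hugepages.sh itself.
    default_hugepages=2048                          # hugepage size in kB (2 MiB)
    nodes_hp=()
    nodes_hp[0]=$(( 1048576 / default_hugepages ))  # 1 GiB -> 512 pages, node 0
    nodes_hp[1]=$(( 2097152 / default_hugepages ))  # 2 GiB -> 1024 pages, node 1
    spec=() nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        spec+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( nr_hugepages += nodes_hp[node] ))
    done
    HUGENODE=$(IFS=,; echo "${spec[*]}")            # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"   # nr_hugepages=1536
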
10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 108963984 kB' 'MemAvailable: 113252652 kB' 'Buffers: 2696 kB' 'Cached: 10612552 kB' 'SwapCached: 0 kB' 'Active: 6743008 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173196 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532104 kB' 'Mapped: 166704 kB' 'Shmem: 5649784 kB' 'KReclaimable: 295672 kB' 'Slab: 919032 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623360 kB' 'KernelStack: 24432 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7687644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228320 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.235 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 
10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
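The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs above is the trace of the get_meminfo helper (setup/common.sh@17-@33) walking every /proc/meminfo key until it reaches the one requested at setup/hugepages.sh@97, then echoing that key's value. Below is a minimal sketch of that lookup, assuming only what the traced commands show; get_meminfo_sketch is a hypothetical name used here for illustration, and the real helper additionally buffers the file into the mem array and handles per-node files, as common.sh@22-@29 in the trace indicates.

get_meminfo_sketch() {
    # Scan /proc/meminfo for the requested key and print its value.
    # Mirrors the IFS=': ' / read -r var val _ loop visible in the trace.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

On this host, get_meminfo_sketch AnonHugePages would print 0, consistent with the "echo 0" and the anon=0 assignment that follow just below (common.sh@33, hugepages.sh@97).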
00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 108965000 kB' 'MemAvailable: 113253668 kB' 'Buffers: 2696 kB' 
'Cached: 10612552 kB' 'SwapCached: 0 kB' 'Active: 6742964 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173152 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532136 kB' 'Mapped: 166696 kB' 'Shmem: 5649784 kB' 'KReclaimable: 295672 kB' 'Slab: 919032 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623360 kB' 'KernelStack: 24432 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7687664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:48.236 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.237 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
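Each lookup above also repeats the same preamble: local node=, a test for /sys/devices/system/node/node/meminfo, and the expansion mem=("${mem[@]#Node +([0-9]) }"). That suggests the helper can read a per-NUMA-node meminfo file and strip its "Node N " prefix, falling back to /proc/meminfo when no node is given, which is the case throughout this run (node is empty). The following is a rough sketch of that file selection under those assumptions; node_meminfo_file is a hypothetical name, not the traced function.

shopt -s extglob                                    # needed for the +([0-9]) pattern below
node_meminfo_file() {
    # Prefer the per-node meminfo when a node id is supplied and present,
    # otherwise fall back to the system-wide /proc/meminfo.
    local node=$1 mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    echo "$mem_f"
}
mapfile -t mem < "$(node_meminfo_file "")"          # no node argument, as in this trace
mem=("${mem[@]#Node +([0-9]) }")                    # per-node lines look like "Node 0 MemTotal: ..."
printf '%s\n' "${mem[0]}"                           # e.g. "MemTotal: 126437972 kB"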
00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 108965116 kB' 'MemAvailable: 113253784 kB' 'Buffers: 2696 kB' 'Cached: 10612572 kB' 'SwapCached: 0 kB' 'Active: 6742120 kB' 'Inactive: 4395652 kB' 'Active(anon): 6172308 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531712 kB' 'Mapped: 166620 kB' 'Shmem: 5649804 kB' 'KReclaimable: 295672 kB' 'Slab: 919004 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623332 kB' 'KernelStack: 24400 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7687684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 
'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.238 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.239 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:48.240 nr_hugepages=1536 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.240 resv_hugepages=0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.240 surplus_hugepages=0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.240 anon_hugepages=0 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 108965560 kB' 'MemAvailable: 113254228 kB' 'Buffers: 2696 kB' 'Cached: 10612596 kB' 'SwapCached: 0 kB' 'Active: 6742120 kB' 'Inactive: 4395652 kB' 'Active(anon): 6172308 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531672 kB' 'Mapped: 166620 kB' 'Shmem: 5649828 kB' 'KReclaimable: 295672 kB' 'Slab: 919004 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623332 kB' 'KernelStack: 24384 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7687704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.240 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.241 
10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.241 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.242 
10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61284288 kB' 'MemUsed: 4471692 kB' 'SwapCached: 0 kB' 'Active: 1738920 kB' 'Inactive: 226004 kB' 'Active(anon): 1650220 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1672288 kB' 'Mapped: 52092 kB' 'AnonPages: 301692 kB' 'Shmem: 1357584 kB' 'KernelStack: 11688 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 463952 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.242 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.243 
10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.243 10:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.243 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.243 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 47681580 kB' 'MemUsed: 13000412 kB' 'SwapCached: 0 kB' 'Active: 5003196 kB' 'Inactive: 4169648 kB' 'Active(anon): 4522084 kB' 'Inactive(anon): 0 kB' 'Active(file): 481112 kB' 'Inactive(file): 4169648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943024 kB' 'Mapped: 114528 kB' 'AnonPages: 229948 kB' 'Shmem: 4292264 kB' 'KernelStack: 12680 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156836 kB' 'Slab: 455052 kB' 'SReclaimable: 156836 kB' 'SUnreclaim: 298216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 
10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.244 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.245 node0=512 expecting 512 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 
1024' 00:02:48.245 node1=1024 expecting 1024 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:48.245 00:02:48.245 real 0m2.753s 00:02:48.245 user 0m0.922s 00:02:48.245 sys 0m1.593s 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:48.245 10:22:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:48.245 ************************************ 00:02:48.245 END TEST custom_alloc 00:02:48.245 ************************************ 00:02:48.245 10:22:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:48.245 10:22:04 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:48.245 10:22:04 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:48.245 10:22:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.245 ************************************ 00:02:48.245 START TEST no_shrink_alloc 00:02:48.245 ************************************ 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.245 10:22:04 setup.sh.hugepages.no_shrink_alloc -- 
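
The trace above closes the custom_alloc case, where the node0=512 / node1=1024 split is confirmed by the [[ 512,1024 == 512,1024 ]] check, and opens no_shrink_alloc by turning the requested 2097152 into nr_hugepages=1024 pinned to node 0. A minimal bash sketch of that arithmetic follows; it is a reconstruction from the xtrace lines only (not the actual setup/hugepages.sh), and it assumes the size argument is in kB against the 2048 kB Hugepagesize reported in the meminfo dumps.

  #!/usr/bin/env bash
  # Reconstruction (not the real setup/hugepages.sh): how the traced values
  # fit together - a requested pool size becomes a hugepage count that is
  # assigned to the requested NUMA node(s).
  set -u

  size_kb=2097152        # requested size from the trace; assumed to be kB (2 GiB)
  hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" in the meminfo dumps above
  node_ids=("0")         # no_shrink_alloc asks for node 0 only

  nr_hugepages=$((size_kb / hugepagesize_kb))   # 2097152 / 2048 = 1024

  declare -a nodes_test=()
  for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages   # a single requested node gets the whole pool
  done

  for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
  done

For the custom_alloc case that just finished, the same bookkeeping produced the 512/1024 per-node expectations echoed in the log.
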
setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:50.792 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:50.792 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.792 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.792 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110012036 kB' 'MemAvailable: 114300704 kB' 'Buffers: 2696 kB' 'Cached: 10612704 kB' 'SwapCached: 0 kB' 'Active: 6743540 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173728 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532936 kB' 'Mapped: 166684 kB' 'Shmem: 5649936 kB' 'KReclaimable: 295672 kB' 'Slab: 918584 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 622912 kB' 'KernelStack: 24768 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7691036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228608 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.792 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 
10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:50.793 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110017864 kB' 'MemAvailable: 114306532 kB' 'Buffers: 2696 kB' 'Cached: 10612704 kB' 'SwapCached: 0 kB' 'Active: 6743992 kB' 'Inactive: 4395652 kB' 'Active(anon): 6174180 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533460 kB' 'Mapped: 166688 kB' 'Shmem: 5649936 kB' 'KReclaimable: 295672 kB' 'Slab: 918524 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 622852 kB' 'KernelStack: 24720 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7691052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228592 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 
10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.794 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.795 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.059 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110019248 kB' 'MemAvailable: 114307916 kB' 'Buffers: 2696 kB' 'Cached: 10612724 kB' 'SwapCached: 0 kB' 'Active: 6743144 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173332 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532496 kB' 'Mapped: 166652 kB' 'Shmem: 5649956 kB' 'KReclaimable: 295672 kB' 'Slab: 918524 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 622852 kB' 'KernelStack: 24672 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228496 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
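
By this point the meminfo helper has been traced twice in a row (AnonHugePages, then HugePages_Surp): each call mapfiles the meminfo source, strips any "Node <id> " prefix, and reads key/value pairs with IFS=': ' until the requested key matches, echoing that value (0 in both cases here). The following is a rough bash reconstruction of that get_meminfo pattern built only from the xtrace output above; the real helper is in setup/common.sh and may differ in detail.

  #!/usr/bin/env bash
  # Approximation of the get_meminfo pattern visible in the trace; not the
  # real setup/common.sh implementation.
  shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern
  set -u

  get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # per-node figures come from sysfs when a node id is supplied
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # node meminfo lines carry a "Node <id> " prefix; drop it so every line
    # is back to the plain "Key:  value" layout
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done
    echo 0
  }

  get_meminfo HugePages_Surp     # system-wide surplus pages (0 in the dumps above)
  get_meminfo HugePages_Free 0   # node 0's free hugepages, if node 0 exists
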
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 
10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.060 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.061 nr_hugepages=1024 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.061 resv_hugepages=0 
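[Editor's note] The xtrace block above is the setup/common.sh get_meminfo helper scanning every /proc/meminfo field until it reaches HugePages_Rsvd, then echoing its value (0 here) before hugepages.sh records resv=0 and prints the nr_hugepages/resv_hugepages summary. The following is a minimal sketch of that parsing pattern, assuming a simplified stand-in named get_meminfo_field rather than the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (illustrative only, not the exact
# setup/common.sh helper). Given a field name and an optional NUMA node, it
# reads /proc/meminfo, or the per-node copy under sysfs, strips the "Node N "
# prefix, and prints the value of the first matching field.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _ line
    # Per-node statistics live under sysfs when a node number is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every non-matching field
        echo "$val"                        # numeric value only (unit dropped)
        return 0
    done
    return 1
}

# Example calls matching the values echoed in the trace:
#   get_meminfo_field HugePages_Total      -> 1024
#   get_meminfo_field HugePages_Rsvd       -> 0
#   get_meminfo_field HugePages_Surp 0     -> surplus pages on NUMA node 0

Splitting on IFS=': ' lets one loop handle both the system-wide file and the per-node sysfs copies once the "Node N " prefix has been stripped, which is why the trace repeats the same IFS/read/continue pattern for each field.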
00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.061 surplus_hugepages=0 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.061 anon_hugepages=0 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110019536 kB' 'MemAvailable: 114308204 kB' 'Buffers: 2696 kB' 'Cached: 10612744 kB' 'SwapCached: 0 kB' 'Active: 6743436 kB' 'Inactive: 4395652 kB' 'Active(anon): 6173624 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532760 kB' 'Mapped: 166652 kB' 'Shmem: 5649976 kB' 'KReclaimable: 295672 kB' 'Slab: 918468 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 622796 kB' 'KernelStack: 24560 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7691100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228624 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.061 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60234640 kB' 'MemUsed: 5521340 kB' 'SwapCached: 0 kB' 'Active: 1739020 kB' 'Inactive: 226004 kB' 'Active(anon): 1650320 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1672412 kB' 'Mapped: 52108 kB' 'AnonPages: 301664 kB' 'Shmem: 1357708 kB' 'KernelStack: 11704 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 463772 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 324936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.062 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:51.063 node0=1024 expecting 1024 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.063 10:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:53.615 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:53.615 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:53.615 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:53.615 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:53.615 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.615 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110007880 kB' 'MemAvailable: 114296548 kB' 'Buffers: 2696 kB' 'Cached: 10612848 kB' 'SwapCached: 0 kB' 'Active: 6744720 kB' 'Inactive: 4395652 kB' 'Active(anon): 6174908 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533552 kB' 'Mapped: 166756 kB' 'Shmem: 5650080 kB' 'KReclaimable: 295672 kB' 'Slab: 918716 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623044 kB' 'KernelStack: 24512 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228384 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.615 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.616 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110008612 kB' 'MemAvailable: 114297280 kB' 'Buffers: 2696 kB' 'Cached: 10612848 kB' 'SwapCached: 0 kB' 'Active: 6745224 kB' 'Inactive: 4395652 kB' 'Active(anon): 6175412 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534064 kB' 'Mapped: 166720 kB' 'Shmem: 5650080 kB' 'KReclaimable: 295672 kB' 'Slab: 918684 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623012 kB' 'KernelStack: 24496 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228352 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.617 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.617 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.618 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110008764 kB' 'MemAvailable: 114297432 kB' 'Buffers: 2696 kB' 'Cached: 10612864 kB' 'SwapCached: 0 kB' 'Active: 6744752 kB' 'Inactive: 4395652 kB' 
'Active(anon): 6174940 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533988 kB' 'Mapped: 166644 kB' 'Shmem: 5650096 kB' 'KReclaimable: 295672 kB' 'Slab: 918696 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623024 kB' 'KernelStack: 24480 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228352 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.619 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.620 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.621 nr_hugepages=1024 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.621 resv_hugepages=0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.621 surplus_hugepages=0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.621 anon_hugepages=0 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110009016 kB' 'MemAvailable: 114297684 kB' 'Buffers: 2696 
kB' 'Cached: 10612888 kB' 'SwapCached: 0 kB' 'Active: 6744356 kB' 'Inactive: 4395652 kB' 'Active(anon): 6174544 kB' 'Inactive(anon): 0 kB' 'Active(file): 569812 kB' 'Inactive(file): 4395652 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533596 kB' 'Mapped: 166644 kB' 'Shmem: 5650120 kB' 'KReclaimable: 295672 kB' 'Slab: 918692 kB' 'SReclaimable: 295672 kB' 'SUnreclaim: 623020 kB' 'KernelStack: 24480 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7689160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228368 kB' 'VmallocChunk: 0 kB' 'Percpu: 88576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2555968 kB' 'DirectMap2M: 21338112 kB' 'DirectMap1G: 112197632 kB' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.621 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.622 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60225584 kB' 'MemUsed: 5530396 kB' 'SwapCached: 0 kB' 'Active: 1742040 kB' 'Inactive: 226004 kB' 'Active(anon): 1653340 kB' 'Inactive(anon): 0 kB' 'Active(file): 88700 kB' 'Inactive(file): 226004 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1672556 kB' 
'Mapped: 52116 kB' 'AnonPages: 304584 kB' 'Shmem: 1357852 kB' 'KernelStack: 11736 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138836 kB' 'Slab: 464088 kB' 'SReclaimable: 138836 kB' 'SUnreclaim: 325252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.623 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.624 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:53.625 node0=1024 expecting 1024 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:53.625 00:02:53.625 real 0m5.384s 00:02:53.625 user 0m1.801s 00:02:53.625 sys 0m3.085s 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:53.625 10:22:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.625 ************************************ 00:02:53.625 END TEST no_shrink_alloc 00:02:53.625 ************************************ 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.625 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:53.886 10:22:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:53.886 00:02:53.886 real 0m21.079s 00:02:53.886 user 0m6.584s 00:02:53.886 sys 0m11.808s 00:02:53.886 10:22:09 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:53.886 10:22:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.886 ************************************ 00:02:53.886 END TEST hugepages 00:02:53.886 ************************************ 00:02:53.886 10:22:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:02:53.886 10:22:09 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:53.886 10:22:09 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:53.886 10:22:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:53.886 ************************************ 00:02:53.886 START TEST driver 00:02:53.886 ************************************ 00:02:53.886 10:22:09 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:02:53.886 * Looking for test storage... 
00:02:53.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:53.886 10:22:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:53.886 10:22:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.886 10:22:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.087 10:22:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:58.087 10:22:13 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:58.087 10:22:13 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:58.087 10:22:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:58.087 ************************************ 00:02:58.087 START TEST guess_driver 00:02:58.087 ************************************ 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 335 > 0 )) 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:58.087 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:58.087 Looking for driver=vfio-pci 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.087 10:22:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.448 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:16 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.449 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.709 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.709 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.709 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.280 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.280 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.280 10:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.280 10:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:02.280 10:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:02.280 10:22:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.281 10:22:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.484 00:03:06.484 real 0m8.438s 00:03:06.484 user 0m2.011s 00:03:06.484 sys 0m3.954s 00:03:06.484 10:22:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:06.484 10:22:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:06.484 
************************************ 00:03:06.484 END TEST guess_driver 00:03:06.484 ************************************ 00:03:06.484 00:03:06.484 real 0m12.811s 00:03:06.484 user 0m3.100s 00:03:06.484 sys 0m6.068s 00:03:06.743 10:22:22 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:06.743 10:22:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:06.743 ************************************ 00:03:06.743 END TEST driver 00:03:06.743 ************************************ 00:03:06.743 10:22:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:06.743 10:22:22 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:06.744 10:22:22 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:06.744 10:22:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.744 ************************************ 00:03:06.744 START TEST devices 00:03:06.744 ************************************ 00:03:06.744 10:22:22 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:06.744 * Looking for test storage... 00:03:06.744 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:06.744 10:22:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:06.744 10:22:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:06.744 10:22:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.744 10:22:22 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:10.946 10:22:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@198 -- # 
min_disk_size=3221225472 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:10.946 10:22:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:10.946 10:22:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:10.946 10:22:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:10.946 No valid GPT data, bailing 00:03:10.947 10:22:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.947 10:22:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:10.947 10:22:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:10.947 10:22:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:10.947 10:22:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:10.947 10:22:25 setup.sh.devices -- setup/common.sh@80 -- # echo 960197124096 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:03:00.0 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:03:10.947 10:22:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:10.947 10:22:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:10.947 10:22:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:10.947 No valid GPT data, bailing 00:03:10.947 10:22:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:10.947 10:22:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:10.947 10:22:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:10.947 10:22:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:10.947 10:22:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:10.947 10:22:26 setup.sh.devices -- setup/common.sh@80 -- # echo 960197124096 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:03:00.0 00:03:10.947 10:22:26 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 2 > 0 )) 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:10.947 10:22:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:10.947 10:22:26 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:10.947 10:22:26 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:10.947 10:22:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:10.947 ************************************ 00:03:10.947 START TEST nvme_mount 00:03:10.947 ************************************ 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:10.947 10:22:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:11.208 Creating new GPT entries in memory. 00:03:11.209 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:11.209 other utilities. 00:03:11.209 10:22:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:11.209 10:22:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:11.209 10:22:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:11.209 10:22:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:11.209 10:22:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:12.595 Creating new GPT entries in memory. 00:03:12.595 The operation has completed successfully. 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2448223 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.595 10:22:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: 
mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.142 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # 
[[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.143 10:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:15.404 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:15.404 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:15.664 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:15.664 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:15.664 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:15.664 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 
0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.664 10:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.207 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:18.208 10:22:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.208 10:22:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:20.753 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:21.015 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:21.015 00:03:21.015 real 0m10.761s 00:03:21.015 user 0m2.680s 00:03:21.015 sys 0m5.211s 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:21.015 10:22:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:21.015 ************************************ 00:03:21.015 END TEST nvme_mount 00:03:21.015 ************************************ 00:03:21.015 10:22:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:21.015 10:22:36 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:21.015 
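Before the dm_mount run continues below, here is a condensed sketch of the cycle the nvme_mount test above just exercised: format the freshly created partition, mount it under the test directory, drop a marker file, then tear everything down and scrub the signatures. Paths and devices are the ones shown in the trace; the marker-file creation step is assumed, since the trace only shows the file being checked and removed.

  mnt=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
  mkfs.ext4 -qF /dev/nvme0n1p1        # quiet, forced format of the 1 GiB test partition
  mkdir -p "$mnt" && mount /dev/nvme0n1p1 "$mnt"
  : > "$mnt/test_nvme"                # marker file (assumed); the test later checks it with [[ -e ... ]]
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all /dev/nvme0n1p1         # drop the ext4 signature from the partition
  wipefs --all /dev/nvme0n1           # then drop the GPT/PMBR from the whole disk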
10:22:36 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:21.015 10:22:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:21.277 ************************************ 00:03:21.277 START TEST dm_mount 00:03:21.277 ************************************ 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:21.277 10:22:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:22.218 Creating new GPT entries in memory. 00:03:22.218 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:22.218 other utilities. 00:03:22.218 10:22:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:22.218 10:22:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.218 10:22:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.218 10:22:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.218 10:22:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:23.158 Creating new GPT entries in memory. 00:03:23.158 The operation has completed successfully. 
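For reference, the partitioning step the dm_mount test is performing here reduces to plain sgdisk calls: wipe the label, then carve out 1 MiB-aligned, 1 GiB partitions while holding a lock on the disk (an identical call for partition 2 follows just below, and a helper, sync_dev_uevents.sh in the trace, appears to wait for the matching partition uevents). A standalone sketch, not the SPDK helper itself:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                              # destroy any existing GPT and protective MBR
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # partition 1: 2097152 sectors = 1 GiB, starting at 1 MiB
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # partition 2: the next 1 GiB (traced below)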
00:03:23.158 10:22:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:23.158 10:22:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:23.158 10:22:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:23.158 10:22:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:23.158 10:22:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:24.099 The operation has completed successfully. 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2452982 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.099 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.359 10:22:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.359 10:22:40 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.359 10:22:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:26.963 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.964 10:22:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 
10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.510 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:29.771 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.772 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:29.772 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.772 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:29.772 10:22:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:29.772 00:03:29.772 real 0m8.597s 00:03:29.772 user 0m1.852s 00:03:29.772 sys 0m3.318s 00:03:29.772 10:22:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:29.772 10:22:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:29.772 ************************************ 00:03:29.772 END TEST dm_mount 00:03:29.772 ************************************ 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@11 -- # 
cleanup_nvme 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.772 10:22:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:30.032 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:30.032 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:30.032 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:30.032 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:30.032 10:22:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:30.032 00:03:30.032 real 0m23.378s 00:03:30.032 user 0m5.742s 00:03:30.032 sys 0m10.981s 00:03:30.032 10:22:45 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:30.032 10:22:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:30.032 ************************************ 00:03:30.032 END TEST devices 00:03:30.032 ************************************ 00:03:30.032 00:03:30.032 real 1m19.446s 00:03:30.032 user 0m21.711s 00:03:30.032 sys 0m41.379s 00:03:30.032 10:22:45 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:30.032 10:22:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.032 ************************************ 00:03:30.032 END TEST setup.sh 00:03:30.032 ************************************ 00:03:30.032 10:22:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:32.601 Hugepages 00:03:32.601 node hugesize free / total 00:03:32.601 node0 1048576kB 0 / 0 00:03:32.601 node0 2048kB 2048 / 2048 00:03:32.601 node1 1048576kB 0 / 0 00:03:32.601 node1 2048kB 0 / 0 00:03:32.601 00:03:32.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:32.861 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:03:32.861 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:32.862 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:32.862 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:32.862 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:32.862 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:32.862 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:32.862 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:32.862 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:03:32.862 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:03:32.862 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:03:32.862 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:32.862 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:32.862 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:32.862 DSA 0000:f1:01.0 8086 0b25 1 idxd - 
- 00:03:32.862 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:32.862 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:32.862 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:32.862 10:22:48 -- spdk/autotest.sh@130 -- # uname -s 00:03:32.862 10:22:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:32.862 10:22:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:32.862 10:22:48 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:35.405 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.666 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.666 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.666 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.666 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.666 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.666 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.666 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.926 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.926 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.926 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.926 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.926 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.926 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:35.926 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:35.926 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:36.497 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.756 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:37.015 10:22:52 -- common/autotest_common.sh@1529 -- # sleep 1 00:03:37.956 10:22:53 -- common/autotest_common.sh@1530 -- # bdfs=() 00:03:37.956 10:22:53 -- common/autotest_common.sh@1530 -- # local bdfs 00:03:37.956 10:22:53 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:03:37.956 10:22:53 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:03:37.956 10:22:53 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:37.956 10:22:53 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:37.956 10:22:53 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:37.956 10:22:53 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.956 10:22:53 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:38.216 10:22:53 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:03:38.216 10:22:53 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:38.216 10:22:53 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.762 Waiting for block devices as requested 00:03:40.762 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:03:41.023 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.023 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.284 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.284 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.284 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.284 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.545 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.545 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.545 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.545 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.805 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.805 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:03:41.805 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:41.805 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 
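The rebinding that continues below is setup.sh reset handing the NVMe controller and the DSA/IAA engines back to their kernel drivers (nvme and idxd). Just before it, the trace shows how get_nvme_bdfs discovers the controllers: scripts/gen_nvme.sh emits a JSON description of the local NVMe devices and jq pulls out each PCI address (traddr), yielding 0000:03:00.0 and 0000:c9:00.0 on this node. A minimal standalone sketch of that discovery step, assuming it is run from this workspace's SPDK checkout (the rootdir value is illustrative):

  #!/usr/bin/env bash
  # Sketch only: mirrors the get_nvme_bdfs pattern visible in the trace above.
  rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # illustrative path

  # gen_nvme.sh prints a JSON config for the local NVMe controllers;
  # jq extracts the PCI address (traddr) of each one.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"

  # Hand the devices back to the kernel drivers, as the reset output here shows.
  sudo "$rootdir/scripts/setup.sh" reset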
00:03:42.067 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:42.067 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:03:42.067 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:03:42.327 10:22:58 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:03:42.327 10:22:58 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:03:00.0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # grep 0000:03:00.0/nvme/nvme 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme1 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # grep oacs 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # oacs=' 0x5e' 00:03:42.327 10:22:58 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:03:42.327 10:22:58 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:03:42.327 10:22:58 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:03:42.327 10:22:58 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:03:42.327 10:22:58 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1554 -- # continue 00:03:42.327 10:22:58 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:03:42.327 10:22:58 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # grep 0000:c9:00.0/nvme/nvme 00:03:42.327 10:22:58 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # grep oacs 00:03:42.327 10:22:58 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:03:42.588 10:22:58 -- common/autotest_common.sh@1542 -- # oacs=' 0x5f' 00:03:42.588 10:22:58 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:03:42.588 10:22:58 -- 
common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:03:42.588 10:22:58 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:03:42.588 10:22:58 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:03:42.588 10:22:58 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:03:42.588 10:22:58 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:03:42.588 10:22:58 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:03:42.588 10:22:58 -- common/autotest_common.sh@1554 -- # continue 00:03:42.588 10:22:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:42.588 10:22:58 -- common/autotest_common.sh@727 -- # xtrace_disable 00:03:42.588 10:22:58 -- common/autotest_common.sh@10 -- # set +x 00:03:42.588 10:22:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:42.588 10:22:58 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:42.588 10:22:58 -- common/autotest_common.sh@10 -- # set +x 00:03:42.588 10:22:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:45.889 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.890 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:45.890 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.462 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:46.721 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:46.721 10:23:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:46.721 10:23:02 -- common/autotest_common.sh@727 -- # xtrace_disable 00:03:46.721 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:03:46.980 10:23:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:46.981 10:23:02 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:03:46.981 10:23:02 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:03:46.981 10:23:02 -- common/autotest_common.sh@1574 -- # bdfs=() 00:03:46.981 10:23:02 -- common/autotest_common.sh@1574 -- # local bdfs 00:03:46.981 10:23:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:03:46.981 10:23:02 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:46.981 10:23:02 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:46.981 10:23:02 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.981 10:23:02 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.981 10:23:02 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:46.981 10:23:02 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:03:46.981 10:23:02 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:46.981 10:23:02 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:03:46.981 10:23:02 -- common/autotest_common.sh@1577 -- # cat 
/sys/bus/pci/devices/0000:03:00.0/device 00:03:46.981 10:23:02 -- common/autotest_common.sh@1577 -- # device=0x51c3 00:03:46.981 10:23:02 -- common/autotest_common.sh@1578 -- # [[ 0x51c3 == \0\x\0\a\5\4 ]] 00:03:46.981 10:23:02 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:03:46.981 10:23:02 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:03:46.981 10:23:02 -- common/autotest_common.sh@1577 -- # device=0xa80a 00:03:46.981 10:23:02 -- common/autotest_common.sh@1578 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:46.981 10:23:02 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:03:46.981 10:23:02 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:03:46.981 10:23:02 -- common/autotest_common.sh@1590 -- # return 0 00:03:46.981 10:23:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:46.981 10:23:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:46.981 10:23:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:46.981 10:23:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:46.981 10:23:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:46.981 10:23:02 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:46.981 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:03:46.981 10:23:02 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:46.981 10:23:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:46.981 10:23:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:46.981 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:03:46.981 ************************************ 00:03:46.981 START TEST env 00:03:46.981 ************************************ 00:03:46.981 10:23:02 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:46.981 * Looking for test storage... 
00:03:46.981 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:03:46.981 10:23:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:46.981 10:23:02 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:46.981 10:23:02 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:46.981 10:23:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.242 ************************************ 00:03:47.242 START TEST env_memory 00:03:47.242 ************************************ 00:03:47.242 10:23:02 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:47.242 00:03:47.242 00:03:47.242 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.242 http://cunit.sourceforge.net/ 00:03:47.242 00:03:47.242 00:03:47.242 Suite: memory 00:03:47.242 Test: alloc and free memory map ...[2024-05-15 10:23:02.928631] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:47.242 passed 00:03:47.242 Test: mem map translation ...[2024-05-15 10:23:02.975274] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:47.242 [2024-05-15 10:23:02.975307] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:47.242 [2024-05-15 10:23:02.975386] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:47.242 [2024-05-15 10:23:02.975407] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:47.242 passed 00:03:47.242 Test: mem map registration ...[2024-05-15 10:23:03.061631] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:47.242 [2024-05-15 10:23:03.061661] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:47.242 passed 00:03:47.504 Test: mem map adjacent registrations ...passed 00:03:47.504 00:03:47.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.504 suites 1 1 n/a 0 0 00:03:47.504 tests 4 4 4 0 0 00:03:47.504 asserts 152 152 152 0 n/a 00:03:47.504 00:03:47.504 Elapsed time = 0.292 seconds 00:03:47.504 00:03:47.504 real 0m0.316s 00:03:47.504 user 0m0.293s 00:03:47.504 sys 0m0.021s 00:03:47.504 10:23:03 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:47.504 10:23:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:47.504 ************************************ 00:03:47.504 END TEST env_memory 00:03:47.504 ************************************ 00:03:47.504 10:23:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:47.504 10:23:03 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:47.504 10:23:03 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:47.504 10:23:03 env -- common/autotest_common.sh@10 
-- # set +x 00:03:47.504 ************************************ 00:03:47.504 START TEST env_vtophys 00:03:47.504 ************************************ 00:03:47.504 10:23:03 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:47.504 EAL: lib.eal log level changed from notice to debug 00:03:47.504 EAL: Detected lcore 0 as core 0 on socket 0 00:03:47.504 EAL: Detected lcore 1 as core 1 on socket 0 00:03:47.504 EAL: Detected lcore 2 as core 2 on socket 0 00:03:47.504 EAL: Detected lcore 3 as core 3 on socket 0 00:03:47.504 EAL: Detected lcore 4 as core 4 on socket 0 00:03:47.504 EAL: Detected lcore 5 as core 5 on socket 0 00:03:47.504 EAL: Detected lcore 6 as core 6 on socket 0 00:03:47.504 EAL: Detected lcore 7 as core 7 on socket 0 00:03:47.504 EAL: Detected lcore 8 as core 8 on socket 0 00:03:47.504 EAL: Detected lcore 9 as core 9 on socket 0 00:03:47.504 EAL: Detected lcore 10 as core 10 on socket 0 00:03:47.504 EAL: Detected lcore 11 as core 11 on socket 0 00:03:47.504 EAL: Detected lcore 12 as core 12 on socket 0 00:03:47.504 EAL: Detected lcore 13 as core 13 on socket 0 00:03:47.504 EAL: Detected lcore 14 as core 14 on socket 0 00:03:47.504 EAL: Detected lcore 15 as core 15 on socket 0 00:03:47.504 EAL: Detected lcore 16 as core 16 on socket 0 00:03:47.504 EAL: Detected lcore 17 as core 17 on socket 0 00:03:47.504 EAL: Detected lcore 18 as core 18 on socket 0 00:03:47.504 EAL: Detected lcore 19 as core 19 on socket 0 00:03:47.504 EAL: Detected lcore 20 as core 20 on socket 0 00:03:47.504 EAL: Detected lcore 21 as core 21 on socket 0 00:03:47.504 EAL: Detected lcore 22 as core 22 on socket 0 00:03:47.504 EAL: Detected lcore 23 as core 23 on socket 0 00:03:47.504 EAL: Detected lcore 24 as core 24 on socket 0 00:03:47.504 EAL: Detected lcore 25 as core 25 on socket 0 00:03:47.504 EAL: Detected lcore 26 as core 26 on socket 0 00:03:47.504 EAL: Detected lcore 27 as core 27 on socket 0 00:03:47.504 EAL: Detected lcore 28 as core 28 on socket 0 00:03:47.504 EAL: Detected lcore 29 as core 29 on socket 0 00:03:47.504 EAL: Detected lcore 30 as core 30 on socket 0 00:03:47.504 EAL: Detected lcore 31 as core 31 on socket 0 00:03:47.504 EAL: Detected lcore 32 as core 0 on socket 1 00:03:47.504 EAL: Detected lcore 33 as core 1 on socket 1 00:03:47.504 EAL: Detected lcore 34 as core 2 on socket 1 00:03:47.504 EAL: Detected lcore 35 as core 3 on socket 1 00:03:47.504 EAL: Detected lcore 36 as core 4 on socket 1 00:03:47.504 EAL: Detected lcore 37 as core 5 on socket 1 00:03:47.504 EAL: Detected lcore 38 as core 6 on socket 1 00:03:47.504 EAL: Detected lcore 39 as core 7 on socket 1 00:03:47.504 EAL: Detected lcore 40 as core 8 on socket 1 00:03:47.504 EAL: Detected lcore 41 as core 9 on socket 1 00:03:47.504 EAL: Detected lcore 42 as core 10 on socket 1 00:03:47.504 EAL: Detected lcore 43 as core 11 on socket 1 00:03:47.504 EAL: Detected lcore 44 as core 12 on socket 1 00:03:47.504 EAL: Detected lcore 45 as core 13 on socket 1 00:03:47.504 EAL: Detected lcore 46 as core 14 on socket 1 00:03:47.504 EAL: Detected lcore 47 as core 15 on socket 1 00:03:47.504 EAL: Detected lcore 48 as core 16 on socket 1 00:03:47.504 EAL: Detected lcore 49 as core 17 on socket 1 00:03:47.504 EAL: Detected lcore 50 as core 18 on socket 1 00:03:47.504 EAL: Detected lcore 51 as core 19 on socket 1 00:03:47.504 EAL: Detected lcore 52 as core 20 on socket 1 00:03:47.504 EAL: Detected lcore 53 as core 21 on socket 1 00:03:47.504 EAL: Detected lcore 54 as 
core 22 on socket 1 00:03:47.504 EAL: Detected lcore 55 as core 23 on socket 1 00:03:47.504 EAL: Detected lcore 56 as core 24 on socket 1 00:03:47.504 EAL: Detected lcore 57 as core 25 on socket 1 00:03:47.504 EAL: Detected lcore 58 as core 26 on socket 1 00:03:47.504 EAL: Detected lcore 59 as core 27 on socket 1 00:03:47.504 EAL: Detected lcore 60 as core 28 on socket 1 00:03:47.504 EAL: Detected lcore 61 as core 29 on socket 1 00:03:47.504 EAL: Detected lcore 62 as core 30 on socket 1 00:03:47.504 EAL: Detected lcore 63 as core 31 on socket 1 00:03:47.504 EAL: Detected lcore 64 as core 0 on socket 0 00:03:47.504 EAL: Detected lcore 65 as core 1 on socket 0 00:03:47.504 EAL: Detected lcore 66 as core 2 on socket 0 00:03:47.504 EAL: Detected lcore 67 as core 3 on socket 0 00:03:47.504 EAL: Detected lcore 68 as core 4 on socket 0 00:03:47.504 EAL: Detected lcore 69 as core 5 on socket 0 00:03:47.504 EAL: Detected lcore 70 as core 6 on socket 0 00:03:47.504 EAL: Detected lcore 71 as core 7 on socket 0 00:03:47.504 EAL: Detected lcore 72 as core 8 on socket 0 00:03:47.504 EAL: Detected lcore 73 as core 9 on socket 0 00:03:47.504 EAL: Detected lcore 74 as core 10 on socket 0 00:03:47.504 EAL: Detected lcore 75 as core 11 on socket 0 00:03:47.504 EAL: Detected lcore 76 as core 12 on socket 0 00:03:47.504 EAL: Detected lcore 77 as core 13 on socket 0 00:03:47.504 EAL: Detected lcore 78 as core 14 on socket 0 00:03:47.504 EAL: Detected lcore 79 as core 15 on socket 0 00:03:47.504 EAL: Detected lcore 80 as core 16 on socket 0 00:03:47.504 EAL: Detected lcore 81 as core 17 on socket 0 00:03:47.504 EAL: Detected lcore 82 as core 18 on socket 0 00:03:47.504 EAL: Detected lcore 83 as core 19 on socket 0 00:03:47.504 EAL: Detected lcore 84 as core 20 on socket 0 00:03:47.504 EAL: Detected lcore 85 as core 21 on socket 0 00:03:47.504 EAL: Detected lcore 86 as core 22 on socket 0 00:03:47.504 EAL: Detected lcore 87 as core 23 on socket 0 00:03:47.504 EAL: Detected lcore 88 as core 24 on socket 0 00:03:47.504 EAL: Detected lcore 89 as core 25 on socket 0 00:03:47.504 EAL: Detected lcore 90 as core 26 on socket 0 00:03:47.504 EAL: Detected lcore 91 as core 27 on socket 0 00:03:47.504 EAL: Detected lcore 92 as core 28 on socket 0 00:03:47.504 EAL: Detected lcore 93 as core 29 on socket 0 00:03:47.504 EAL: Detected lcore 94 as core 30 on socket 0 00:03:47.504 EAL: Detected lcore 95 as core 31 on socket 0 00:03:47.504 EAL: Detected lcore 96 as core 0 on socket 1 00:03:47.504 EAL: Detected lcore 97 as core 1 on socket 1 00:03:47.504 EAL: Detected lcore 98 as core 2 on socket 1 00:03:47.504 EAL: Detected lcore 99 as core 3 on socket 1 00:03:47.504 EAL: Detected lcore 100 as core 4 on socket 1 00:03:47.504 EAL: Detected lcore 101 as core 5 on socket 1 00:03:47.504 EAL: Detected lcore 102 as core 6 on socket 1 00:03:47.504 EAL: Detected lcore 103 as core 7 on socket 1 00:03:47.504 EAL: Detected lcore 104 as core 8 on socket 1 00:03:47.504 EAL: Detected lcore 105 as core 9 on socket 1 00:03:47.504 EAL: Detected lcore 106 as core 10 on socket 1 00:03:47.504 EAL: Detected lcore 107 as core 11 on socket 1 00:03:47.504 EAL: Detected lcore 108 as core 12 on socket 1 00:03:47.504 EAL: Detected lcore 109 as core 13 on socket 1 00:03:47.504 EAL: Detected lcore 110 as core 14 on socket 1 00:03:47.504 EAL: Detected lcore 111 as core 15 on socket 1 00:03:47.504 EAL: Detected lcore 112 as core 16 on socket 1 00:03:47.504 EAL: Detected lcore 113 as core 17 on socket 1 00:03:47.504 EAL: Detected lcore 114 as core 18 on socket 1 
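The lcore map EAL prints here (and continues below) is read from sysfs CPU topology: each lcore is a Linux CPU, identified by its core_id and physical_package_id. The same mapping can be dumped without DPDK at all; a small sketch using only standard sysfs paths:

  #!/usr/bin/env bash
  # Print "lcore N is core X on socket Y" roughly as EAL reports it above.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    lcore=${cpu##*/cpu}
    [[ -r $cpu/topology/core_id ]] || continue      # offline CPUs expose no topology
    core=$(cat "$cpu/topology/core_id")
    socket=$(cat "$cpu/topology/physical_package_id")
    echo "lcore $lcore is core $core on socket $socket"
  done | sort -k2,2n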
00:03:47.504 EAL: Detected lcore 115 as core 19 on socket 1 00:03:47.504 EAL: Detected lcore 116 as core 20 on socket 1 00:03:47.504 EAL: Detected lcore 117 as core 21 on socket 1 00:03:47.504 EAL: Detected lcore 118 as core 22 on socket 1 00:03:47.504 EAL: Detected lcore 119 as core 23 on socket 1 00:03:47.504 EAL: Detected lcore 120 as core 24 on socket 1 00:03:47.504 EAL: Detected lcore 121 as core 25 on socket 1 00:03:47.504 EAL: Detected lcore 122 as core 26 on socket 1 00:03:47.504 EAL: Detected lcore 123 as core 27 on socket 1 00:03:47.504 EAL: Detected lcore 124 as core 28 on socket 1 00:03:47.504 EAL: Detected lcore 125 as core 29 on socket 1 00:03:47.504 EAL: Detected lcore 126 as core 30 on socket 1 00:03:47.504 EAL: Detected lcore 127 as core 31 on socket 1 00:03:47.504 EAL: Maximum logical cores by configuration: 128 00:03:47.504 EAL: Detected CPU lcores: 128 00:03:47.504 EAL: Detected NUMA nodes: 2 00:03:47.504 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:47.504 EAL: Detected shared linkage of DPDK 00:03:47.504 EAL: No shared files mode enabled, IPC will be disabled 00:03:47.504 EAL: Bus pci wants IOVA as 'DC' 00:03:47.504 EAL: Buses did not request a specific IOVA mode. 00:03:47.504 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:47.504 EAL: Selected IOVA mode 'VA' 00:03:47.504 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.504 EAL: Probing VFIO support... 00:03:47.504 EAL: IOMMU type 1 (Type 1) is supported 00:03:47.504 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:47.504 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:47.504 EAL: VFIO support initialized 00:03:47.504 EAL: Ask a virtual area of 0x2e000 bytes 00:03:47.504 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:47.504 EAL: Setting up physically contiguous memory... 
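The memory-segment reservation that follows draws on the hugepage pools reported by setup.sh status earlier (2048 pages of 2048kB on node0, none on node1): EAL creates four memseg lists per NUMA node, reserves virtual address ranges for them, and backs them with hugepages only as the heap actually grows. The per-node pools can be checked directly in sysfs; a sketch using standard kernel paths:

  #!/usr/bin/env bash
  # Show free/total 2MB hugepages per NUMA node, matching the
  # "node hugesize free / total" table printed by setup.sh status.
  for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s 2048kB %s / %s\n' "${node##*/}" \
      "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
  done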
00:03:47.504 EAL: Setting maximum number of open files to 524288 00:03:47.504 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:47.504 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:47.504 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:47.504 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.504 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:47.766 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:47.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.766 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:47.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.766 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:47.766 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:47.766 EAL: Hugepages will be freed exactly as allocated. 00:03:47.766 EAL: No shared files mode enabled, IPC is disabled 00:03:47.766 EAL: No shared files mode enabled, IPC is disabled 00:03:47.766 EAL: TSC frequency is ~1900000 KHz 00:03:47.766 EAL: Main lcore 0 is ready (tid=7f78f0453a40;cpuset=[0]) 00:03:47.766 EAL: Trying to obtain current memory policy. 00:03:47.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.766 EAL: Restoring previous memory policy: 0 00:03:47.766 EAL: request: mp_malloc_sync 00:03:47.766 EAL: No shared files mode enabled, IPC is disabled 00:03:47.766 EAL: Heap on socket 0 was expanded by 2MB 00:03:47.766 EAL: No shared files mode enabled, IPC is disabled 00:03:47.766 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:47.766 EAL: Mem event callback 'spdk:(nil)' registered 00:03:47.766 00:03:47.766 00:03:47.766 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.766 http://cunit.sourceforge.net/ 00:03:47.766 00:03:47.766 00:03:47.766 Suite: components_suite 00:03:48.062 Test: vtophys_malloc_test ...passed 00:03:48.062 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.062 EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.062 EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.062 EAL: Trying to obtain current memory policy. 
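Each expand/shrink pair that follows is one allocation size in vtophys_spdk_malloc_test: every allocation makes the EAL heap grow ("Heap on socket 0 was expanded by ...") and notifies the registered 'spdk:(nil)' mem event callback, and the matching free shrinks the heap again, with hugepages freed exactly as they were allocated. That churn is visible from outside the test as the free-hugepage count dipping and recovering; a sketch for watching it while the binary runs (test path taken from the run_test line above):

  #!/usr/bin/env bash
  # Sketch: run the vtophys unit test and sample the hugepage pool around it.
  vtophys=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys

  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # pool before the run
  sudo "$vtophys"                                  # emits the expand/shrink lines seen here
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # afterwards the pool is back to its starting level

  # To see the dip live, run this from a second shell while the test executes:
  #   watch -n 0.5 'grep HugePages_Free /proc/meminfo'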
00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.062 EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.062 EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.062 EAL: Trying to obtain current memory policy. 00:03:48.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.062 EAL: Restoring previous memory policy: 4 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.062 EAL: request: mp_malloc_sync 00:03:48.062 EAL: No shared files mode enabled, IPC is disabled 00:03:48.062 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.062 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.323 EAL: request: mp_malloc_sync 00:03:48.323 EAL: No shared files mode enabled, IPC is disabled 00:03:48.323 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.323 EAL: Trying to obtain current memory policy. 00:03:48.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.323 EAL: Restoring previous memory policy: 4 00:03:48.323 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.323 EAL: request: mp_malloc_sync 00:03:48.323 EAL: No shared files mode enabled, IPC is disabled 00:03:48.323 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.323 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.323 EAL: request: mp_malloc_sync 00:03:48.323 EAL: No shared files mode enabled, IPC is disabled 00:03:48.323 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.583 EAL: Trying to obtain current memory policy. 
00:03:48.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.583 EAL: Restoring previous memory policy: 4 00:03:48.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.583 EAL: request: mp_malloc_sync 00:03:48.583 EAL: No shared files mode enabled, IPC is disabled 00:03:48.583 EAL: Heap on socket 0 was expanded by 514MB 00:03:48.843 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.105 EAL: request: mp_malloc_sync 00:03:49.105 EAL: No shared files mode enabled, IPC is disabled 00:03:49.105 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.367 EAL: Trying to obtain current memory policy. 00:03:49.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.367 EAL: Restoring previous memory policy: 4 00:03:49.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.367 EAL: request: mp_malloc_sync 00:03:49.367 EAL: No shared files mode enabled, IPC is disabled 00:03:49.367 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.937 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.197 EAL: request: mp_malloc_sync 00:03:50.197 EAL: No shared files mode enabled, IPC is disabled 00:03:50.197 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:50.768 passed 00:03:50.768 00:03:50.768 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.768 suites 1 1 n/a 0 0 00:03:50.768 tests 2 2 2 0 0 00:03:50.768 asserts 497 497 497 0 n/a 00:03:50.768 00:03:50.768 Elapsed time = 2.881 seconds 00:03:50.768 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.768 EAL: request: mp_malloc_sync 00:03:50.768 EAL: No shared files mode enabled, IPC is disabled 00:03:50.768 EAL: Heap on socket 0 was shrunk by 2MB 00:03:50.768 EAL: No shared files mode enabled, IPC is disabled 00:03:50.768 EAL: No shared files mode enabled, IPC is disabled 00:03:50.768 EAL: No shared files mode enabled, IPC is disabled 00:03:50.768 00:03:50.768 real 0m3.129s 00:03:50.768 user 0m2.453s 00:03:50.768 sys 0m0.627s 00:03:50.768 10:23:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.768 10:23:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:50.768 ************************************ 00:03:50.768 END TEST env_vtophys 00:03:50.768 ************************************ 00:03:50.768 10:23:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:03:50.768 10:23:06 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:50.768 10:23:06 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:50.768 10:23:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.768 ************************************ 00:03:50.768 START TEST env_pci 00:03:50.768 ************************************ 00:03:50.768 10:23:06 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:03:50.768 00:03:50.768 00:03:50.768 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.768 http://cunit.sourceforge.net/ 00:03:50.768 00:03:50.768 00:03:50.768 Suite: pci 00:03:50.768 Test: pci_hook ...[2024-05-15 10:23:06.481627] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2464176 has claimed it 00:03:50.768 EAL: Cannot find device (10000:00:01.0) 00:03:50.768 EAL: Failed to attach device on primary process 00:03:50.768 passed 00:03:50.768 00:03:50.768 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.768 suites 1 1 
n/a 0 0 00:03:50.768 tests 1 1 1 0 0 00:03:50.768 asserts 25 25 25 0 n/a 00:03:50.768 00:03:50.768 Elapsed time = 0.058 seconds 00:03:50.768 00:03:50.768 real 0m0.118s 00:03:50.768 user 0m0.039s 00:03:50.768 sys 0m0.078s 00:03:50.768 10:23:06 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.768 10:23:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:50.768 ************************************ 00:03:50.768 END TEST env_pci 00:03:50.768 ************************************ 00:03:50.768 10:23:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:50.768 10:23:06 env -- env/env.sh@15 -- # uname 00:03:50.768 10:23:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:50.768 10:23:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:50.768 10:23:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:50.768 10:23:06 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:03:50.768 10:23:06 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:50.768 10:23:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.768 ************************************ 00:03:50.768 START TEST env_dpdk_post_init 00:03:50.768 ************************************ 00:03:50.768 10:23:06 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.029 EAL: Detected CPU lcores: 128 00:03:51.029 EAL: Detected NUMA nodes: 2 00:03:51.029 EAL: Detected shared linkage of DPDK 00:03:51.029 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.029 EAL: Selected IOVA mode 'VA' 00:03:51.029 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.029 EAL: VFIO support initialized 00:03:51.029 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.029 EAL: Using IOMMU type 1 (Type 1) 00:03:51.288 EAL: Probe PCI driver: spdk_nvme (1344:51c3) device: 0000:03:00.0 (socket 0) 00:03:51.548 EAL: Ignore mapping IO port bar(1) 00:03:51.548 EAL: Ignore mapping IO port bar(3) 00:03:51.548 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:03:51.548 EAL: Ignore mapping IO port bar(1) 00:03:51.548 EAL: Ignore mapping IO port bar(3) 00:03:51.808 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:03:51.808 EAL: Ignore mapping IO port bar(1) 00:03:51.808 EAL: Ignore mapping IO port bar(3) 00:03:52.069 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:03:52.069 EAL: Ignore mapping IO port bar(1) 00:03:52.069 EAL: Ignore mapping IO port bar(3) 00:03:52.330 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:03:52.330 EAL: Ignore mapping IO port bar(1) 00:03:52.330 EAL: Ignore mapping IO port bar(3) 00:03:52.330 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:03:52.592 EAL: Ignore mapping IO port bar(1) 00:03:52.592 EAL: Ignore mapping IO port bar(3) 00:03:52.592 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:03:52.853 EAL: Ignore mapping IO port bar(1) 00:03:52.853 EAL: Ignore mapping IO port bar(3) 00:03:52.853 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:03:53.114 EAL: Ignore mapping IO port bar(1) 00:03:53.114 EAL: Ignore mapping IO port bar(3) 00:03:53.114 EAL: 
Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:03:53.376 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:c9:00.0 (socket 1) 00:03:53.376 EAL: Ignore mapping IO port bar(1) 00:03:53.376 EAL: Ignore mapping IO port bar(3) 00:03:53.636 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:03:53.636 EAL: Ignore mapping IO port bar(1) 00:03:53.636 EAL: Ignore mapping IO port bar(3) 00:03:53.897 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:03:53.897 EAL: Ignore mapping IO port bar(1) 00:03:53.897 EAL: Ignore mapping IO port bar(3) 00:03:53.897 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:03:54.157 EAL: Ignore mapping IO port bar(1) 00:03:54.158 EAL: Ignore mapping IO port bar(3) 00:03:54.158 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:03:54.418 EAL: Ignore mapping IO port bar(1) 00:03:54.418 EAL: Ignore mapping IO port bar(3) 00:03:54.418 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:03:54.679 EAL: Ignore mapping IO port bar(1) 00:03:54.679 EAL: Ignore mapping IO port bar(3) 00:03:54.679 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:03:54.940 EAL: Ignore mapping IO port bar(1) 00:03:54.940 EAL: Ignore mapping IO port bar(3) 00:03:54.940 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:03:54.940 EAL: Ignore mapping IO port bar(1) 00:03:54.940 EAL: Ignore mapping IO port bar(3) 00:03:55.200 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:03:55.771 EAL: Releasing PCI mapped resource for 0000:03:00.0 00:03:55.771 EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x202001000000 00:03:56.031 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:03:56.031 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x2020011c0000 00:03:56.292 Starting DPDK initialization... 00:03:56.292 Starting SPDK post initialization... 00:03:56.292 SPDK NVMe probe 00:03:56.292 Attaching to 0000:03:00.0 00:03:56.292 Attaching to 0000:c9:00.0 00:03:56.292 Attached to 0000:c9:00.0 00:03:56.292 Attached to 0000:03:00.0 00:03:56.292 Cleaning up... 
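The probe sequence above is env_dpdk_post_init bringing up DPDK with core mask 0x1 and a fixed base virtual address, then attaching SPDK drivers: the two NVMe controllers attach to spdk_nvme and the DSA/IAA engines to spdk_idxd (the "Ignore mapping IO port bar" notes are EAL skipping I/O-port BARs on those devices). The binary can also be run by hand once the devices are bound to vfio-pci; a sketch with the arguments taken from the run_test line earlier:

  #!/usr/bin/env bash
  # Sketch: reproduce the env_dpdk_post_init probe outside of run_test.
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk

  sudo "$spdk/scripts/setup.sh"                      # bind NVMe/DSA/IAA to vfio-pci, set up hugepages
  sudo "$spdk/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
  sudo "$spdk/scripts/setup.sh" reset                # return the devices to the kernel drivers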
00:03:58.202 00:03:58.202 real 0m6.949s 00:03:58.202 user 0m1.071s 00:03:58.202 sys 0m0.171s 00:03:58.202 10:23:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:58.202 10:23:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.202 ************************************ 00:03:58.202 END TEST env_dpdk_post_init 00:03:58.202 ************************************ 00:03:58.202 10:23:13 env -- env/env.sh@26 -- # uname 00:03:58.202 10:23:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.202 10:23:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.202 10:23:13 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:58.202 10:23:13 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:58.202 10:23:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.202 ************************************ 00:03:58.202 START TEST env_mem_callbacks 00:03:58.202 ************************************ 00:03:58.202 10:23:13 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.202 EAL: Detected CPU lcores: 128 00:03:58.202 EAL: Detected NUMA nodes: 2 00:03:58.202 EAL: Detected shared linkage of DPDK 00:03:58.202 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.203 EAL: Selected IOVA mode 'VA' 00:03:58.203 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.203 EAL: VFIO support initialized 00:03:58.203 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.203 00:03:58.203 00:03:58.203 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.203 http://cunit.sourceforge.net/ 00:03:58.203 00:03:58.203 00:03:58.203 Suite: memory 00:03:58.203 Test: test ... 
00:03:58.203 register 0x200000200000 2097152 00:03:58.203 malloc 3145728 00:03:58.203 register 0x200000400000 4194304 00:03:58.203 buf 0x2000004fffc0 len 3145728 PASSED 00:03:58.203 malloc 64 00:03:58.203 buf 0x2000004ffec0 len 64 PASSED 00:03:58.203 malloc 4194304 00:03:58.203 register 0x200000800000 6291456 00:03:58.203 buf 0x2000009fffc0 len 4194304 PASSED 00:03:58.203 free 0x2000004fffc0 3145728 00:03:58.203 free 0x2000004ffec0 64 00:03:58.203 unregister 0x200000400000 4194304 PASSED 00:03:58.203 free 0x2000009fffc0 4194304 00:03:58.203 unregister 0x200000800000 6291456 PASSED 00:03:58.203 malloc 8388608 00:03:58.203 register 0x200000400000 10485760 00:03:58.203 buf 0x2000005fffc0 len 8388608 PASSED 00:03:58.203 free 0x2000005fffc0 8388608 00:03:58.203 unregister 0x200000400000 10485760 PASSED 00:03:58.203 passed 00:03:58.203 00:03:58.203 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.203 suites 1 1 n/a 0 0 00:03:58.203 tests 1 1 1 0 0 00:03:58.203 asserts 15 15 15 0 n/a 00:03:58.203 00:03:58.203 Elapsed time = 0.023 seconds 00:03:58.203 00:03:58.203 real 0m0.136s 00:03:58.203 user 0m0.058s 00:03:58.203 sys 0m0.078s 00:03:58.203 10:23:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:58.203 10:23:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:58.203 ************************************ 00:03:58.203 END TEST env_mem_callbacks 00:03:58.203 ************************************ 00:03:58.203 00:03:58.203 real 0m11.043s 00:03:58.203 user 0m4.059s 00:03:58.203 sys 0m1.243s 00:03:58.203 10:23:13 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:58.203 10:23:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.203 ************************************ 00:03:58.203 END TEST env 00:03:58.203 ************************************ 00:03:58.203 10:23:13 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.203 10:23:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:58.203 10:23:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:58.203 10:23:13 -- common/autotest_common.sh@10 -- # set +x 00:03:58.203 ************************************ 00:03:58.203 START TEST rpc 00:03:58.203 ************************************ 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.203 * Looking for test storage... 00:03:58.203 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:03:58.203 10:23:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2465763 00:03:58.203 10:23:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.203 10:23:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2465763 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@828 -- # '[' -z 2465763 ']' 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
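waitforlisten blocks until the spdk_tgt process started above (build/bin/spdk_tgt -e bdev, pid 2465763 in this run) is alive and answering JSON-RPC on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, using rpc_get_methods as the readiness probe (the retry count and sleep interval are illustrative):

  #!/usr/bin/env bash
  # Sketch: launch spdk_tgt and wait until its RPC socket responds.
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk

  sudo "$spdk/build/bin/spdk_tgt" -e bdev &          # -e bdev enables the bdev tracepoint group
  spdk_pid=$!
  trap 'sudo kill "$spdk_pid"' EXIT

  for _ in $(seq 1 100); do                          # illustrative ~10s timeout
    if sudo "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null; then
      echo "spdk_tgt is up and listening on /var/tmp/spdk.sock"
      break
    fi
    sleep 0.1
  done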
00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:03:58.203 10:23:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.203 10:23:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:58.203 [2024-05-15 10:23:14.047263] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:03:58.203 [2024-05-15 10:23:14.047409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465763 ] 00:03:58.463 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.463 [2024-05-15 10:23:14.177899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.463 [2024-05-15 10:23:14.270048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:58.463 [2024-05-15 10:23:14.270097] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2465763' to capture a snapshot of events at runtime. 00:03:58.463 [2024-05-15 10:23:14.270109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:58.463 [2024-05-15 10:23:14.270119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:58.463 [2024-05-15 10:23:14.270128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2465763 for offline analysis/debug. 00:03:58.463 [2024-05-15 10:23:14.270164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.034 10:23:14 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:03:59.034 10:23:14 rpc -- common/autotest_common.sh@861 -- # return 0 00:03:59.034 10:23:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:03:59.034 10:23:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:03:59.034 10:23:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:59.034 10:23:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:59.034 10:23:14 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.034 10:23:14 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.034 10:23:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 ************************************ 00:03:59.034 START TEST rpc_integrity 00:03:59.034 ************************************ 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.034 10:23:14 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # jq length 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.034 { 00:03:59.034 "name": "Malloc0", 00:03:59.034 "aliases": [ 00:03:59.034 "67f0a6de-47b6-4e1f-9bbe-f7642083a7a8" 00:03:59.034 ], 00:03:59.034 "product_name": "Malloc disk", 00:03:59.034 "block_size": 512, 00:03:59.034 "num_blocks": 16384, 00:03:59.034 "uuid": "67f0a6de-47b6-4e1f-9bbe-f7642083a7a8", 00:03:59.034 "assigned_rate_limits": { 00:03:59.034 "rw_ios_per_sec": 0, 00:03:59.034 "rw_mbytes_per_sec": 0, 00:03:59.034 "r_mbytes_per_sec": 0, 00:03:59.034 "w_mbytes_per_sec": 0 00:03:59.034 }, 00:03:59.034 "claimed": false, 00:03:59.034 "zoned": false, 00:03:59.034 "supported_io_types": { 00:03:59.034 "read": true, 00:03:59.034 "write": true, 00:03:59.034 "unmap": true, 00:03:59.034 "write_zeroes": true, 00:03:59.034 "flush": true, 00:03:59.034 "reset": true, 00:03:59.034 "compare": false, 00:03:59.034 "compare_and_write": false, 00:03:59.034 "abort": true, 00:03:59.034 "nvme_admin": false, 00:03:59.034 "nvme_io": false 00:03:59.034 }, 00:03:59.034 "memory_domains": [ 00:03:59.034 { 00:03:59.034 "dma_device_id": "system", 00:03:59.034 "dma_device_type": 1 00:03:59.034 }, 00:03:59.034 { 00:03:59.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.034 "dma_device_type": 2 00:03:59.034 } 00:03:59.034 ], 00:03:59.034 "driver_specific": {} 00:03:59.034 } 00:03:59.034 ]' 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.034 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 [2024-05-15 10:23:14.902274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:59.034 [2024-05-15 10:23:14.902329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.034 [2024-05-15 10:23:14.902359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:03:59.034 [2024-05-15 10:23:14.902369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.034 [2024-05-15 10:23:14.904122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.034 [2024-05-15 10:23:14.904152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.034 Passthru0 00:03:59.034 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
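rpc_integrity exercises the bdev RPCs end to end: it has just layered a passthru bdev on top of the malloc bdev it created, and below it verifies that both appear in bdev_get_bdevs before deleting them in reverse order. Roughly the same round trip can be reproduced by hand against a running spdk_tgt (names match the test; jq is only used to count the returned bdevs):

    # create an 8 MiB malloc bdev with 512-byte blocks; the RPC prints its name (Malloc0)
    ./scripts/rpc.py bdev_malloc_create 8 512

    # claim it with a passthru bdev
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0

    # both bdevs should now be reported
    ./scripts/rpc.py bdev_get_bdevs | jq length      # expect 2

    # tear down: passthru first, then the underlying malloc bdev
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length      # expect 0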
00:03:59.294 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.294 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.294 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.294 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.295 { 00:03:59.295 "name": "Malloc0", 00:03:59.295 "aliases": [ 00:03:59.295 "67f0a6de-47b6-4e1f-9bbe-f7642083a7a8" 00:03:59.295 ], 00:03:59.295 "product_name": "Malloc disk", 00:03:59.295 "block_size": 512, 00:03:59.295 "num_blocks": 16384, 00:03:59.295 "uuid": "67f0a6de-47b6-4e1f-9bbe-f7642083a7a8", 00:03:59.295 "assigned_rate_limits": { 00:03:59.295 "rw_ios_per_sec": 0, 00:03:59.295 "rw_mbytes_per_sec": 0, 00:03:59.295 "r_mbytes_per_sec": 0, 00:03:59.295 "w_mbytes_per_sec": 0 00:03:59.295 }, 00:03:59.295 "claimed": true, 00:03:59.295 "claim_type": "exclusive_write", 00:03:59.295 "zoned": false, 00:03:59.295 "supported_io_types": { 00:03:59.295 "read": true, 00:03:59.295 "write": true, 00:03:59.295 "unmap": true, 00:03:59.295 "write_zeroes": true, 00:03:59.295 "flush": true, 00:03:59.295 "reset": true, 00:03:59.295 "compare": false, 00:03:59.295 "compare_and_write": false, 00:03:59.295 "abort": true, 00:03:59.295 "nvme_admin": false, 00:03:59.295 "nvme_io": false 00:03:59.295 }, 00:03:59.295 "memory_domains": [ 00:03:59.295 { 00:03:59.295 "dma_device_id": "system", 00:03:59.295 "dma_device_type": 1 00:03:59.295 }, 00:03:59.295 { 00:03:59.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.295 "dma_device_type": 2 00:03:59.295 } 00:03:59.295 ], 00:03:59.295 "driver_specific": {} 00:03:59.295 }, 00:03:59.295 { 00:03:59.295 "name": "Passthru0", 00:03:59.295 "aliases": [ 00:03:59.295 "780cbdce-6aef-5ce0-baa7-a7dd2d65f724" 00:03:59.295 ], 00:03:59.295 "product_name": "passthru", 00:03:59.295 "block_size": 512, 00:03:59.295 "num_blocks": 16384, 00:03:59.295 "uuid": "780cbdce-6aef-5ce0-baa7-a7dd2d65f724", 00:03:59.295 "assigned_rate_limits": { 00:03:59.295 "rw_ios_per_sec": 0, 00:03:59.295 "rw_mbytes_per_sec": 0, 00:03:59.295 "r_mbytes_per_sec": 0, 00:03:59.295 "w_mbytes_per_sec": 0 00:03:59.295 }, 00:03:59.295 "claimed": false, 00:03:59.295 "zoned": false, 00:03:59.295 "supported_io_types": { 00:03:59.295 "read": true, 00:03:59.295 "write": true, 00:03:59.295 "unmap": true, 00:03:59.295 "write_zeroes": true, 00:03:59.295 "flush": true, 00:03:59.295 "reset": true, 00:03:59.295 "compare": false, 00:03:59.295 "compare_and_write": false, 00:03:59.295 "abort": true, 00:03:59.295 "nvme_admin": false, 00:03:59.295 "nvme_io": false 00:03:59.295 }, 00:03:59.295 "memory_domains": [ 00:03:59.295 { 00:03:59.295 "dma_device_id": "system", 00:03:59.295 "dma_device_type": 1 00:03:59.295 }, 00:03:59.295 { 00:03:59.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.295 "dma_device_type": 2 00:03:59.295 } 00:03:59.295 ], 00:03:59.295 "driver_specific": { 00:03:59.295 "passthru": { 00:03:59.295 "name": "Passthru0", 00:03:59.295 "base_bdev_name": "Malloc0" 00:03:59.295 } 00:03:59.295 } 00:03:59.295 } 00:03:59.295 ]' 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:14 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.295 10:23:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.295 10:23:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.295 00:03:59.295 real 0m0.240s 00:03:59.295 user 0m0.128s 00:03:59.295 sys 0m0.040s 00:03:59.295 10:23:15 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.295 10:23:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 ************************************ 00:03:59.295 END TEST rpc_integrity 00:03:59.295 ************************************ 00:03:59.295 10:23:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:59.295 10:23:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.295 10:23:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.295 10:23:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 ************************************ 00:03:59.295 START TEST rpc_plugins 00:03:59.295 ************************************ 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:59.295 { 00:03:59.295 "name": "Malloc1", 00:03:59.295 "aliases": [ 00:03:59.295 "ab0884e4-3057-4a43-b88f-2b3b1e769de2" 00:03:59.295 ], 00:03:59.295 "product_name": "Malloc disk", 00:03:59.295 "block_size": 4096, 00:03:59.295 "num_blocks": 256, 00:03:59.295 "uuid": "ab0884e4-3057-4a43-b88f-2b3b1e769de2", 00:03:59.295 "assigned_rate_limits": { 00:03:59.295 "rw_ios_per_sec": 0, 00:03:59.295 "rw_mbytes_per_sec": 0, 00:03:59.295 "r_mbytes_per_sec": 0, 00:03:59.295 "w_mbytes_per_sec": 0 00:03:59.295 }, 00:03:59.295 "claimed": false, 00:03:59.295 "zoned": false, 00:03:59.295 "supported_io_types": { 00:03:59.295 "read": true, 00:03:59.295 "write": true, 
00:03:59.295 "unmap": true, 00:03:59.295 "write_zeroes": true, 00:03:59.295 "flush": true, 00:03:59.295 "reset": true, 00:03:59.295 "compare": false, 00:03:59.295 "compare_and_write": false, 00:03:59.295 "abort": true, 00:03:59.295 "nvme_admin": false, 00:03:59.295 "nvme_io": false 00:03:59.295 }, 00:03:59.295 "memory_domains": [ 00:03:59.295 { 00:03:59.295 "dma_device_id": "system", 00:03:59.295 "dma_device_type": 1 00:03:59.295 }, 00:03:59.295 { 00:03:59.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.295 "dma_device_type": 2 00:03:59.295 } 00:03:59.295 ], 00:03:59.295 "driver_specific": {} 00:03:59.295 } 00:03:59.295 ]' 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:59.295 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.295 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.556 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:59.556 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.556 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.556 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.556 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:59.556 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:59.556 10:23:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:59.556 00:03:59.556 real 0m0.114s 00:03:59.556 user 0m0.068s 00:03:59.556 sys 0m0.016s 00:03:59.556 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.556 10:23:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.556 ************************************ 00:03:59.556 END TEST rpc_plugins 00:03:59.556 ************************************ 00:03:59.556 10:23:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:59.556 10:23:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.556 10:23:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.556 10:23:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.556 ************************************ 00:03:59.556 START TEST rpc_trace_cmd_test 00:03:59.556 ************************************ 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:59.556 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2465763", 00:03:59.556 "tpoint_group_mask": "0x8", 00:03:59.556 "iscsi_conn": { 00:03:59.556 "mask": "0x2", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "scsi": { 00:03:59.556 "mask": "0x4", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "bdev": 
{ 00:03:59.556 "mask": "0x8", 00:03:59.556 "tpoint_mask": "0xffffffffffffffff" 00:03:59.556 }, 00:03:59.556 "nvmf_rdma": { 00:03:59.556 "mask": "0x10", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "nvmf_tcp": { 00:03:59.556 "mask": "0x20", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "ftl": { 00:03:59.556 "mask": "0x40", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "blobfs": { 00:03:59.556 "mask": "0x80", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "dsa": { 00:03:59.556 "mask": "0x200", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "thread": { 00:03:59.556 "mask": "0x400", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "nvme_pcie": { 00:03:59.556 "mask": "0x800", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "iaa": { 00:03:59.556 "mask": "0x1000", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "nvme_tcp": { 00:03:59.556 "mask": "0x2000", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "bdev_nvme": { 00:03:59.556 "mask": "0x4000", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 }, 00:03:59.556 "sock": { 00:03:59.556 "mask": "0x8000", 00:03:59.556 "tpoint_mask": "0x0" 00:03:59.556 } 00:03:59.556 }' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:59.556 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:59.817 10:23:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:59.817 00:03:59.817 real 0m0.169s 00:03:59.817 user 0m0.140s 00:03:59.817 sys 0m0.021s 00:03:59.817 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.817 10:23:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.817 ************************************ 00:03:59.817 END TEST rpc_trace_cmd_test 00:03:59.817 ************************************ 00:03:59.817 10:23:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:59.817 10:23:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:59.817 10:23:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:59.817 10:23:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.817 10:23:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.817 10:23:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.817 ************************************ 00:03:59.817 START TEST rpc_daemon_integrity 00:03:59.817 ************************************ 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 
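The rpc_trace_cmd_test above checks that trace_get_info reflects the -e bdev flag the target was started with: the tpoint_group_mask comes back as 0x8, the bdev group's tpoint_mask is fully enabled, and the tracepoint shared memory is exposed under /dev/shm. The same assertions, sketched with plain rpc.py and jq (the pid in the shm path is whatever this spdk_tgt instance happens to get):

    info=$(./scripts/rpc.py trace_get_info)

    echo "$info" | jq -r .tpoint_group_mask   # expect 0x8 for '-e bdev'
    echo "$info" | jq -r .bdev.tpoint_mask    # expect 0xffffffffffffffff
    echo "$info" | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<pid>

    # as the startup notice suggests, the shm file can then be decoded offline:
    # spdk_trace -s spdk_tgt -p <pid>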
00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.817 { 00:03:59.817 "name": "Malloc2", 00:03:59.817 "aliases": [ 00:03:59.817 "64a94953-0a0c-47ae-8936-d28fd1f77dac" 00:03:59.817 ], 00:03:59.817 "product_name": "Malloc disk", 00:03:59.817 "block_size": 512, 00:03:59.817 "num_blocks": 16384, 00:03:59.817 "uuid": "64a94953-0a0c-47ae-8936-d28fd1f77dac", 00:03:59.817 "assigned_rate_limits": { 00:03:59.817 "rw_ios_per_sec": 0, 00:03:59.817 "rw_mbytes_per_sec": 0, 00:03:59.817 "r_mbytes_per_sec": 0, 00:03:59.817 "w_mbytes_per_sec": 0 00:03:59.817 }, 00:03:59.817 "claimed": false, 00:03:59.817 "zoned": false, 00:03:59.817 "supported_io_types": { 00:03:59.817 "read": true, 00:03:59.817 "write": true, 00:03:59.817 "unmap": true, 00:03:59.817 "write_zeroes": true, 00:03:59.817 "flush": true, 00:03:59.817 "reset": true, 00:03:59.817 "compare": false, 00:03:59.817 "compare_and_write": false, 00:03:59.817 "abort": true, 00:03:59.817 "nvme_admin": false, 00:03:59.817 "nvme_io": false 00:03:59.817 }, 00:03:59.817 "memory_domains": [ 00:03:59.817 { 00:03:59.817 "dma_device_id": "system", 00:03:59.817 "dma_device_type": 1 00:03:59.817 }, 00:03:59.817 { 00:03:59.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.817 "dma_device_type": 2 00:03:59.817 } 00:03:59.817 ], 00:03:59.817 "driver_specific": {} 00:03:59.817 } 00:03:59.817 ]' 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.817 [2024-05-15 10:23:15.616365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.817 [2024-05-15 10:23:15.616411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.817 [2024-05-15 10:23:15.616436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:03:59.817 [2024-05-15 10:23:15.616445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.817 
[2024-05-15 10:23:15.618152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.817 [2024-05-15 10:23:15.618182] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.817 Passthru0 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.817 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.818 { 00:03:59.818 "name": "Malloc2", 00:03:59.818 "aliases": [ 00:03:59.818 "64a94953-0a0c-47ae-8936-d28fd1f77dac" 00:03:59.818 ], 00:03:59.818 "product_name": "Malloc disk", 00:03:59.818 "block_size": 512, 00:03:59.818 "num_blocks": 16384, 00:03:59.818 "uuid": "64a94953-0a0c-47ae-8936-d28fd1f77dac", 00:03:59.818 "assigned_rate_limits": { 00:03:59.818 "rw_ios_per_sec": 0, 00:03:59.818 "rw_mbytes_per_sec": 0, 00:03:59.818 "r_mbytes_per_sec": 0, 00:03:59.818 "w_mbytes_per_sec": 0 00:03:59.818 }, 00:03:59.818 "claimed": true, 00:03:59.818 "claim_type": "exclusive_write", 00:03:59.818 "zoned": false, 00:03:59.818 "supported_io_types": { 00:03:59.818 "read": true, 00:03:59.818 "write": true, 00:03:59.818 "unmap": true, 00:03:59.818 "write_zeroes": true, 00:03:59.818 "flush": true, 00:03:59.818 "reset": true, 00:03:59.818 "compare": false, 00:03:59.818 "compare_and_write": false, 00:03:59.818 "abort": true, 00:03:59.818 "nvme_admin": false, 00:03:59.818 "nvme_io": false 00:03:59.818 }, 00:03:59.818 "memory_domains": [ 00:03:59.818 { 00:03:59.818 "dma_device_id": "system", 00:03:59.818 "dma_device_type": 1 00:03:59.818 }, 00:03:59.818 { 00:03:59.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.818 "dma_device_type": 2 00:03:59.818 } 00:03:59.818 ], 00:03:59.818 "driver_specific": {} 00:03:59.818 }, 00:03:59.818 { 00:03:59.818 "name": "Passthru0", 00:03:59.818 "aliases": [ 00:03:59.818 "ab3120b8-4b35-5bfd-92ef-b2ba93da20b6" 00:03:59.818 ], 00:03:59.818 "product_name": "passthru", 00:03:59.818 "block_size": 512, 00:03:59.818 "num_blocks": 16384, 00:03:59.818 "uuid": "ab3120b8-4b35-5bfd-92ef-b2ba93da20b6", 00:03:59.818 "assigned_rate_limits": { 00:03:59.818 "rw_ios_per_sec": 0, 00:03:59.818 "rw_mbytes_per_sec": 0, 00:03:59.818 "r_mbytes_per_sec": 0, 00:03:59.818 "w_mbytes_per_sec": 0 00:03:59.818 }, 00:03:59.818 "claimed": false, 00:03:59.818 "zoned": false, 00:03:59.818 "supported_io_types": { 00:03:59.818 "read": true, 00:03:59.818 "write": true, 00:03:59.818 "unmap": true, 00:03:59.818 "write_zeroes": true, 00:03:59.818 "flush": true, 00:03:59.818 "reset": true, 00:03:59.818 "compare": false, 00:03:59.818 "compare_and_write": false, 00:03:59.818 "abort": true, 00:03:59.818 "nvme_admin": false, 00:03:59.818 "nvme_io": false 00:03:59.818 }, 00:03:59.818 "memory_domains": [ 00:03:59.818 { 00:03:59.818 "dma_device_id": "system", 00:03:59.818 "dma_device_type": 1 00:03:59.818 }, 00:03:59.818 { 00:03:59.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.818 "dma_device_type": 2 00:03:59.818 } 00:03:59.818 ], 00:03:59.818 "driver_specific": { 00:03:59.818 "passthru": { 00:03:59.818 "name": "Passthru0", 00:03:59.818 "base_bdev_name": "Malloc2" 00:03:59.818 } 00:03:59.818 } 00:03:59.818 
} 00:03:59.818 ]' 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.818 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.078 00:04:00.078 real 0m0.232s 00:04:00.078 user 0m0.133s 00:04:00.078 sys 0m0.032s 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:00.078 10:23:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.078 ************************************ 00:04:00.078 END TEST rpc_daemon_integrity 00:04:00.078 ************************************ 00:04:00.078 10:23:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:00.078 10:23:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2465763 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@947 -- # '[' -z 2465763 ']' 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@951 -- # kill -0 2465763 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@952 -- # uname 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2465763 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2465763' 00:04:00.078 killing process with pid 2465763 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@966 -- # kill 2465763 00:04:00.078 10:23:15 rpc -- common/autotest_common.sh@971 -- # wait 2465763 00:04:01.018 00:04:01.018 real 0m2.803s 00:04:01.018 user 0m3.220s 00:04:01.018 sys 0m0.760s 00:04:01.018 10:23:16 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:01.018 10:23:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.018 ************************************ 00:04:01.018 END TEST rpc 00:04:01.018 ************************************ 00:04:01.018 10:23:16 -- spdk/autotest.sh@166 -- # run_test skip_rpc 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.018 10:23:16 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:01.018 10:23:16 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:01.018 10:23:16 -- common/autotest_common.sh@10 -- # set +x 00:04:01.018 ************************************ 00:04:01.018 START TEST skip_rpc 00:04:01.018 ************************************ 00:04:01.018 10:23:16 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.018 * Looking for test storage... 00:04:01.018 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:01.018 10:23:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:01.018 10:23:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:01.018 10:23:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.018 10:23:16 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:01.018 10:23:16 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:01.018 10:23:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.018 ************************************ 00:04:01.018 START TEST skip_rpc 00:04:01.018 ************************************ 00:04:01.018 10:23:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:01.018 10:23:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2466533 00:04:01.018 10:23:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.018 10:23:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:01.018 10:23:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.279 [2024-05-15 10:23:16.940026] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
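skip_rpc starts the target with --no-rpc-server, so /var/tmp/spdk.sock is never created and the spdk_get_version call issued below is expected to fail; the test's NOT wrapper passes only if the RPC errors out. A hand-run equivalent of that negative check, with the NOT helper replaced by a plain if (the sleep mirrors the test, which cannot wait for a socket that will never appear):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5

    # this must fail: there is no RPC server listening
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi

    kill -9 $tgt_pid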
00:04:01.279 [2024-05-15 10:23:16.940143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466533 ] 00:04:01.279 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.279 [2024-05-15 10:23:17.058327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.279 [2024-05-15 10:23:17.150364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2466533 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 2466533 ']' 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 2466533 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2466533 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2466533' 00:04:06.566 killing process with pid 2466533 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 2466533 00:04:06.566 10:23:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 2466533 00:04:07.136 00:04:07.136 real 0m5.868s 00:04:07.136 user 0m5.548s 00:04:07.136 sys 0m0.330s 00:04:07.136 10:23:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:07.136 10:23:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.136 ************************************ 00:04:07.136 END TEST skip_rpc 
00:04:07.136 ************************************ 00:04:07.136 10:23:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.136 10:23:22 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:07.136 10:23:22 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:07.136 10:23:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.136 ************************************ 00:04:07.136 START TEST skip_rpc_with_json 00:04:07.136 ************************************ 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2467744 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2467744 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 2467744 ']' 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.136 10:23:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.136 [2024-05-15 10:23:22.893805] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:07.137 [2024-05-15 10:23:22.893937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467744 ] 00:04:07.137 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.397 [2024-05-15 10:23:23.026392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.398 [2024-05-15 10:23:23.119206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.971 [2024-05-15 10:23:23.577876] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.971 request: 00:04:07.971 { 00:04:07.971 "trtype": "tcp", 00:04:07.971 "method": "nvmf_get_transports", 00:04:07.971 "req_id": 1 00:04:07.971 } 00:04:07.971 Got JSON-RPC error response 00:04:07.971 response: 00:04:07.971 { 00:04:07.971 "code": -19, 00:04:07.971 "message": "No such device" 00:04:07.971 } 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.971 [2024-05-15 10:23:23.585964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:07.971 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:07.971 { 00:04:07.971 "subsystems": [ 00:04:07.971 { 00:04:07.971 "subsystem": "keyring", 00:04:07.971 "config": [] 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "subsystem": "iobuf", 00:04:07.971 "config": [ 00:04:07.971 { 00:04:07.971 "method": "iobuf_set_options", 00:04:07.971 "params": { 00:04:07.971 "small_pool_count": 8192, 00:04:07.971 "large_pool_count": 1024, 00:04:07.971 "small_bufsize": 8192, 00:04:07.971 "large_bufsize": 135168 00:04:07.971 } 00:04:07.971 } 00:04:07.971 ] 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "subsystem": "sock", 00:04:07.971 "config": [ 00:04:07.971 { 00:04:07.971 "method": "sock_impl_set_options", 00:04:07.971 "params": { 00:04:07.971 "impl_name": "posix", 00:04:07.971 "recv_buf_size": 2097152, 00:04:07.971 "send_buf_size": 2097152, 00:04:07.971 "enable_recv_pipe": true, 00:04:07.971 "enable_quickack": false, 00:04:07.971 
"enable_placement_id": 0, 00:04:07.971 "enable_zerocopy_send_server": true, 00:04:07.971 "enable_zerocopy_send_client": false, 00:04:07.971 "zerocopy_threshold": 0, 00:04:07.971 "tls_version": 0, 00:04:07.971 "enable_ktls": false 00:04:07.971 } 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "method": "sock_impl_set_options", 00:04:07.971 "params": { 00:04:07.971 "impl_name": "ssl", 00:04:07.971 "recv_buf_size": 4096, 00:04:07.971 "send_buf_size": 4096, 00:04:07.971 "enable_recv_pipe": true, 00:04:07.971 "enable_quickack": false, 00:04:07.971 "enable_placement_id": 0, 00:04:07.971 "enable_zerocopy_send_server": true, 00:04:07.971 "enable_zerocopy_send_client": false, 00:04:07.971 "zerocopy_threshold": 0, 00:04:07.971 "tls_version": 0, 00:04:07.971 "enable_ktls": false 00:04:07.971 } 00:04:07.971 } 00:04:07.971 ] 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "subsystem": "vmd", 00:04:07.971 "config": [] 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "subsystem": "accel", 00:04:07.971 "config": [ 00:04:07.971 { 00:04:07.971 "method": "accel_set_options", 00:04:07.971 "params": { 00:04:07.971 "small_cache_size": 128, 00:04:07.971 "large_cache_size": 16, 00:04:07.971 "task_count": 2048, 00:04:07.971 "sequence_count": 2048, 00:04:07.971 "buf_count": 2048 00:04:07.971 } 00:04:07.971 } 00:04:07.971 ] 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "subsystem": "bdev", 00:04:07.971 "config": [ 00:04:07.971 { 00:04:07.971 "method": "bdev_set_options", 00:04:07.971 "params": { 00:04:07.971 "bdev_io_pool_size": 65535, 00:04:07.971 "bdev_io_cache_size": 256, 00:04:07.971 "bdev_auto_examine": true, 00:04:07.971 "iobuf_small_cache_size": 128, 00:04:07.971 "iobuf_large_cache_size": 16 00:04:07.971 } 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "method": "bdev_raid_set_options", 00:04:07.971 "params": { 00:04:07.971 "process_window_size_kb": 1024 00:04:07.971 } 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "method": "bdev_iscsi_set_options", 00:04:07.971 "params": { 00:04:07.971 "timeout_sec": 30 00:04:07.971 } 00:04:07.971 }, 00:04:07.971 { 00:04:07.971 "method": "bdev_nvme_set_options", 00:04:07.971 "params": { 00:04:07.971 "action_on_timeout": "none", 00:04:07.971 "timeout_us": 0, 00:04:07.971 "timeout_admin_us": 0, 00:04:07.971 "keep_alive_timeout_ms": 10000, 00:04:07.971 "arbitration_burst": 0, 00:04:07.971 "low_priority_weight": 0, 00:04:07.971 "medium_priority_weight": 0, 00:04:07.971 "high_priority_weight": 0, 00:04:07.971 "nvme_adminq_poll_period_us": 10000, 00:04:07.971 "nvme_ioq_poll_period_us": 0, 00:04:07.971 "io_queue_requests": 0, 00:04:07.971 "delay_cmd_submit": true, 00:04:07.971 "transport_retry_count": 4, 00:04:07.971 "bdev_retry_count": 3, 00:04:07.971 "transport_ack_timeout": 0, 00:04:07.971 "ctrlr_loss_timeout_sec": 0, 00:04:07.971 "reconnect_delay_sec": 0, 00:04:07.971 "fast_io_fail_timeout_sec": 0, 00:04:07.971 "disable_auto_failback": false, 00:04:07.971 "generate_uuids": false, 00:04:07.971 "transport_tos": 0, 00:04:07.971 "nvme_error_stat": false, 00:04:07.971 "rdma_srq_size": 0, 00:04:07.971 "io_path_stat": false, 00:04:07.972 "allow_accel_sequence": false, 00:04:07.972 "rdma_max_cq_size": 0, 00:04:07.972 "rdma_cm_event_timeout_ms": 0, 00:04:07.972 "dhchap_digests": [ 00:04:07.972 "sha256", 00:04:07.972 "sha384", 00:04:07.972 "sha512" 00:04:07.972 ], 00:04:07.972 "dhchap_dhgroups": [ 00:04:07.972 "null", 00:04:07.972 "ffdhe2048", 00:04:07.972 "ffdhe3072", 00:04:07.972 "ffdhe4096", 00:04:07.972 "ffdhe6144", 00:04:07.972 "ffdhe8192" 00:04:07.972 ] 00:04:07.972 } 00:04:07.972 }, 00:04:07.972 { 
00:04:07.972 "method": "bdev_nvme_set_hotplug", 00:04:07.972 "params": { 00:04:07.972 "period_us": 100000, 00:04:07.972 "enable": false 00:04:07.972 } 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "method": "bdev_wait_for_examine" 00:04:07.972 } 00:04:07.972 ] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "scsi", 00:04:07.972 "config": null 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "scheduler", 00:04:07.972 "config": [ 00:04:07.972 { 00:04:07.972 "method": "framework_set_scheduler", 00:04:07.972 "params": { 00:04:07.972 "name": "static" 00:04:07.972 } 00:04:07.972 } 00:04:07.972 ] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "vhost_scsi", 00:04:07.972 "config": [] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "vhost_blk", 00:04:07.972 "config": [] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "ublk", 00:04:07.972 "config": [] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "nbd", 00:04:07.972 "config": [] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "nvmf", 00:04:07.972 "config": [ 00:04:07.972 { 00:04:07.972 "method": "nvmf_set_config", 00:04:07.972 "params": { 00:04:07.972 "discovery_filter": "match_any", 00:04:07.972 "admin_cmd_passthru": { 00:04:07.972 "identify_ctrlr": false 00:04:07.972 } 00:04:07.972 } 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "method": "nvmf_set_max_subsystems", 00:04:07.972 "params": { 00:04:07.972 "max_subsystems": 1024 00:04:07.972 } 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "method": "nvmf_set_crdt", 00:04:07.972 "params": { 00:04:07.972 "crdt1": 0, 00:04:07.972 "crdt2": 0, 00:04:07.972 "crdt3": 0 00:04:07.972 } 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "method": "nvmf_create_transport", 00:04:07.972 "params": { 00:04:07.972 "trtype": "TCP", 00:04:07.972 "max_queue_depth": 128, 00:04:07.972 "max_io_qpairs_per_ctrlr": 127, 00:04:07.972 "in_capsule_data_size": 4096, 00:04:07.972 "max_io_size": 131072, 00:04:07.972 "io_unit_size": 131072, 00:04:07.972 "max_aq_depth": 128, 00:04:07.972 "num_shared_buffers": 511, 00:04:07.972 "buf_cache_size": 4294967295, 00:04:07.972 "dif_insert_or_strip": false, 00:04:07.972 "zcopy": false, 00:04:07.972 "c2h_success": true, 00:04:07.972 "sock_priority": 0, 00:04:07.972 "abort_timeout_sec": 1, 00:04:07.972 "ack_timeout": 0, 00:04:07.972 "data_wr_pool_size": 0 00:04:07.972 } 00:04:07.972 } 00:04:07.972 ] 00:04:07.972 }, 00:04:07.972 { 00:04:07.972 "subsystem": "iscsi", 00:04:07.972 "config": [ 00:04:07.972 { 00:04:07.972 "method": "iscsi_set_options", 00:04:07.972 "params": { 00:04:07.972 "node_base": "iqn.2016-06.io.spdk", 00:04:07.972 "max_sessions": 128, 00:04:07.972 "max_connections_per_session": 2, 00:04:07.972 "max_queue_depth": 64, 00:04:07.972 "default_time2wait": 2, 00:04:07.972 "default_time2retain": 20, 00:04:07.972 "first_burst_length": 8192, 00:04:07.972 "immediate_data": true, 00:04:07.972 "allow_duplicated_isid": false, 00:04:07.972 "error_recovery_level": 0, 00:04:07.972 "nop_timeout": 60, 00:04:07.972 "nop_in_interval": 30, 00:04:07.972 "disable_chap": false, 00:04:07.972 "require_chap": false, 00:04:07.972 "mutual_chap": false, 00:04:07.972 "chap_group": 0, 00:04:07.972 "max_large_datain_per_connection": 64, 00:04:07.972 "max_r2t_per_connection": 4, 00:04:07.972 "pdu_pool_size": 36864, 00:04:07.972 "immediate_data_pool_size": 16384, 00:04:07.972 "data_out_pool_size": 2048 00:04:07.972 } 00:04:07.972 } 00:04:07.972 ] 00:04:07.972 } 00:04:07.972 ] 00:04:07.972 } 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2467744 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2467744 ']' 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2467744 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2467744 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2467744' 00:04:07.972 killing process with pid 2467744 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2467744 00:04:07.972 10:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2467744 00:04:08.990 10:23:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2468058 00:04:08.990 10:23:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:08.990 10:23:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2468058 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2468058 ']' 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2468058 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2468058 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:14.273 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2468058' 00:04:14.273 killing process with pid 2468058 00:04:14.274 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2468058 00:04:14.274 10:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2468058 00:04:14.843 10:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:14.843 10:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:14.843 00:04:14.843 real 0m7.729s 00:04:14.843 user 0m7.323s 00:04:14.843 sys 0m0.715s 00:04:14.843 10:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:14.843 10:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.844 
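skip_rpc_with_json, just completed above, builds its configuration live (nvmf_create_transport -t tcp), snapshots it with save_config, and proves the snapshot is usable by restarting the target non-interactively with --json and grepping the log for the TCP transport init message. Condensed into a hand-runnable sketch, with shortened paths (config.json and log.txt stand in for the test's CONFIG_PATH and LOG_PATH):

    # against a running spdk_tgt: add the transport, then dump the whole config
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json

    # stop that target, then restart from the saved JSON with the RPC server disabled
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5

    # the transport from the snapshot must come back up
    grep -q 'TCP Transport Init' log.txt && echo 'config replayed OK'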
************************************ 00:04:14.844 END TEST skip_rpc_with_json 00:04:14.844 ************************************ 00:04:14.844 10:23:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:14.844 10:23:30 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:14.844 10:23:30 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:14.844 10:23:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.844 ************************************ 00:04:14.844 START TEST skip_rpc_with_delay 00:04:14.844 ************************************ 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.844 [2024-05-15 10:23:30.674500] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
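skip_rpc_with_delay is a pure negative test: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is meaningless together with --no-rpc-server, so spdk_tgt must refuse to start with the error shown above. A one-off reproduction of the check (a zero exit status here would be the failure case):

    # must fail: cannot wait for RPCs when the RPC server is disabled
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi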
00:04:14.844 [2024-05-15 10:23:30.674638] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:14.844 00:04:14.844 real 0m0.126s 00:04:14.844 user 0m0.067s 00:04:14.844 sys 0m0.058s 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:14.844 10:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.844 ************************************ 00:04:14.844 END TEST skip_rpc_with_delay 00:04:14.844 ************************************ 00:04:15.104 10:23:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:15.105 10:23:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:15.105 10:23:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:15.105 10:23:30 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:15.105 10:23:30 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:15.105 10:23:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.105 ************************************ 00:04:15.105 START TEST exit_on_failed_rpc_init 00:04:15.105 ************************************ 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2469291 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2469291 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 2469291 ']' 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.105 10:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.105 [2024-05-15 10:23:30.860625] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:15.105 [2024-05-15 10:23:30.860738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469291 ] 00:04:15.105 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.365 [2024-05-15 10:23:30.978706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.365 [2024-05-15 10:23:31.075186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:15.935 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.935 [2024-05-15 10:23:31.590652] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:15.935 [2024-05-15 10:23:31.590740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469325 ] 00:04:15.935 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.935 [2024-05-15 10:23:31.681508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.935 [2024-05-15 10:23:31.779860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.935 [2024-05-15 10:23:31.779951] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
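exit_on_failed_rpc_init starts one target normally and then launches a second one on core mask 0x2 that tries to bind the same default /var/tmp/spdk.sock; RPC initialization fails as shown, and the test only cares that the second instance exits non-zero instead of hanging. Sketched by hand (the second launch is expected to fail):

    # first instance owns the default RPC socket
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

    # second instance must fail: /var/tmp/spdk.sock is already in use
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second spdk_tgt started despite the RPC socket being in use" >&2
        exit 1
    fi

    kill -9 $first_pid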
00:04:15.935 [2024-05-15 10:23:31.779967] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:15.935 [2024-05-15 10:23:31.779977] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2469291 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 2469291 ']' 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 2469291 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2469291 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2469291' 00:04:16.195 killing process with pid 2469291 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 2469291 00:04:16.195 10:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 2469291 00:04:17.141 00:04:17.141 real 0m2.072s 00:04:17.141 user 0m2.271s 00:04:17.141 sys 0m0.490s 00:04:17.141 10:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.141 10:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.141 ************************************ 00:04:17.141 END TEST exit_on_failed_rpc_init 00:04:17.141 ************************************ 00:04:17.141 10:23:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:17.141 00:04:17.141 real 0m16.124s 00:04:17.141 user 0m15.319s 00:04:17.141 sys 0m1.822s 00:04:17.141 10:23:32 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.141 10:23:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.141 ************************************ 00:04:17.141 END TEST skip_rpc 00:04:17.141 ************************************ 00:04:17.141 10:23:32 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.141 10:23:32 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:17.141 10:23:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:17.141 10:23:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:17.141 ************************************ 00:04:17.141 START TEST rpc_client 00:04:17.141 ************************************ 00:04:17.141 10:23:32 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.402 * Looking for test storage... 00:04:17.402 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:17.402 10:23:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:17.402 OK 00:04:17.402 10:23:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.402 00:04:17.402 real 0m0.114s 00:04:17.402 user 0m0.045s 00:04:17.402 sys 0m0.073s 00:04:17.402 10:23:33 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.402 10:23:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.402 ************************************ 00:04:17.402 END TEST rpc_client 00:04:17.402 ************************************ 00:04:17.402 10:23:33 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.402 10:23:33 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:17.402 10:23:33 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:17.402 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:17.402 ************************************ 00:04:17.402 START TEST json_config 00:04:17.402 ************************************ 00:04:17.402 10:23:33 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.402 10:23:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:17.402 10:23:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:17.403 10:23:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.403 10:23:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.403 10:23:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.403 10:23:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.403 10:23:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.403 10:23:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.403 10:23:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:17.403 10:23:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@47 -- # : 0 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:17.403 10:23:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:17.403 INFO: JSON configuration test init 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.403 10:23:33 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:17.403 10:23:33 json_config -- json_config/common.sh@9 -- # local app=target 00:04:17.403 10:23:33 json_config -- json_config/common.sh@10 -- # shift 00:04:17.403 10:23:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:17.403 10:23:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:17.403 10:23:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:17.403 10:23:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.403 10:23:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.403 10:23:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2469779 00:04:17.403 10:23:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:17.403 Waiting for target to run... 
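At this point the json_config harness has recorded the target PID (2469779), printed "Waiting for target to run...", and is about to park in waitforlisten until the RPC socket at /var/tmp/spdk_tgt.sock answers; the actual launch with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc follows in the next entries. A rough standalone equivalent of that wait loop — the rpc_get_methods probe and 0.1 s sleep are assumptions about how such a poll is typically written, not a claim about autotest_common.sh itself, and the rpc.py path assumes an SPDK checkout as the working directory:

    # Hypothetical stand-in for waitforlisten: poll until the target process is
    # alive and its RPC socket answers, or give up after a retry budget.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} retries=${3:-100}
        local i
        for ((i = 0; i < retries; i++)); do
            # Bail out early if the target died instead of coming up.
            kill -0 "$pid" 2>/dev/null || return 1
            # Probe the socket; rpc_get_methods responds even before framework init.
            if scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    # Usage, assuming: spdk_tgt -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # wait_for_rpc_socket "$!" /var/tmp/spdk_tgt.sock || { echo "target never came up" >&2; exit 1; }
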
00:04:17.403 10:23:33 json_config -- json_config/common.sh@25 -- # waitforlisten 2469779 /var/tmp/spdk_tgt.sock 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@828 -- # '[' -z 2469779 ']' 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:17.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:17.403 10:23:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.403 10:23:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:17.662 [2024-05-15 10:23:33.300881] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:17.662 [2024-05-15 10:23:33.301003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469779 ] 00:04:17.662 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.920 [2024-05-15 10:23:33.605847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.920 [2024-05-15 10:23:33.685507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:18.490 10:23:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.490 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:18.490 10:23:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:18.490 10:23:34 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:18.490 10:23:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:19.432 10:23:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:19.432 10:23:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@45 -- # 
local ret=0 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:19.432 10:23:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:19.432 10:23:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:19.697 10:23:35 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:19.697 10:23:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:19.697 10:23:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:19.697 10:23:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:19.697 10:23:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.697 10:23:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.697 MallocForNvmf0 00:04:19.960 10:23:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.960 10:23:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.960 MallocForNvmf1 00:04:19.960 10:23:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:19.960 10:23:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.220 [2024-05-15 10:23:35.860865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.220 10:23:35 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.220 10:23:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.220 10:23:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.220 10:23:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.480 10:23:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.480 10:23:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.480 10:23:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.480 10:23:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.741 [2024-05-15 10:23:36.461038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:20.741 [2024-05-15 10:23:36.461451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.741 10:23:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:20.741 10:23:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:20.741 10:23:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.741 10:23:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:20.741 10:23:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:20.741 10:23:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.741 10:23:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:20.741 10:23:36 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:20.741 10:23:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.002 MallocBdevForConfigChangeCheck 00:04:21.002 10:23:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:21.002 10:23:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:21.002 10:23:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.002 10:23:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:21.002 10:23:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.262 10:23:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:21.262 INFO: shutting down applications... 
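Everything the target now contains was built over RPC in the entries above: two malloc bdevs, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces and a listener on 127.0.0.1:4420, plus MallocBdevForConfigChangeCheck, before save_config captures the state. The same sequence collected into one runnable script against a live target — the redirect into spdk_tgt_config.json stands in for how the harness stores the snapshot, and the rpc.py path assumes an SPDK checkout:

    #!/usr/bin/env bash
    # The RPC sequence the json_config test issues, gathered in one place.
    set -euo pipefail

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs for the namespaces (size in MB, block size).
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, then a subsystem with both namespaces and one listener.
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # Extra bdev used later to detect a configuration change.
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

    # Persist the resulting configuration for the relaunch/compare phase.
    $RPC save_config > spdk_tgt_config.json
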
00:04:21.262 10:23:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:21.262 10:23:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:21.262 10:23:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:21.262 10:23:37 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:23.172 Calling clear_iscsi_subsystem 00:04:23.172 Calling clear_nvmf_subsystem 00:04:23.172 Calling clear_nbd_subsystem 00:04:23.172 Calling clear_ublk_subsystem 00:04:23.172 Calling clear_vhost_blk_subsystem 00:04:23.172 Calling clear_vhost_scsi_subsystem 00:04:23.172 Calling clear_bdev_subsystem 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.172 10:23:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:23.741 10:23:39 json_config -- json_config/json_config.sh@345 -- # break 00:04:23.741 10:23:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:23.741 10:23:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:23.741 10:23:39 json_config -- json_config/common.sh@31 -- # local app=target 00:04:23.741 10:23:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.741 10:23:39 json_config -- json_config/common.sh@35 -- # [[ -n 2469779 ]] 00:04:23.741 10:23:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2469779 00:04:23.741 [2024-05-15 10:23:39.327208] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:23.741 10:23:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.741 10:23:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.741 10:23:39 json_config -- json_config/common.sh@41 -- # kill -0 2469779 00:04:23.741 10:23:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.001 10:23:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.001 10:23:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.001 10:23:39 json_config -- json_config/common.sh@41 -- # kill -0 2469779 00:04:24.001 10:23:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.001 10:23:39 json_config -- json_config/common.sh@43 -- # break 00:04:24.001 10:23:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.001 10:23:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.001 SPDK target shutdown done 00:04:24.001 10:23:39 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:24.001 INFO: relaunching applications... 00:04:24.001 10:23:39 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.001 10:23:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.001 10:23:39 json_config -- json_config/common.sh@10 -- # shift 00:04:24.001 10:23:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.001 10:23:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.001 10:23:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.001 10:23:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.001 10:23:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.001 10:23:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2471330 00:04:24.001 10:23:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.001 Waiting for target to run... 00:04:24.001 10:23:39 json_config -- json_config/common.sh@25 -- # waitforlisten 2471330 /var/tmp/spdk_tgt.sock 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@828 -- # '[' -z 2471330 ']' 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:24.001 10:23:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.001 10:23:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.262 [2024-05-15 10:23:39.938221] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:24.262 [2024-05-15 10:23:39.938367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471330 ] 00:04:24.262 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.832 [2024-05-15 10:23:40.439353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.832 [2024-05-15 10:23:40.529526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.773 [2024-05-15 10:23:41.628795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.034 [2024-05-15 10:23:41.660707] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:26.034 [2024-05-15 10:23:41.661115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.034 10:23:41 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:26.034 10:23:41 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:26.034 10:23:41 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.034 00:04:26.034 10:23:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:26.034 10:23:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.034 INFO: Checking if target configuration is the same... 00:04:26.034 10:23:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.034 10:23:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:26.034 10:23:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.034 + '[' 2 -ne 2 ']' 00:04:26.034 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.034 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:26.034 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:26.034 +++ basename /dev/fd/62 00:04:26.034 ++ mktemp /tmp/62.XXX 00:04:26.034 + tmp_file_1=/tmp/62.mId 00:04:26.034 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.034 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.034 + tmp_file_2=/tmp/spdk_tgt_config.json.yRF 00:04:26.034 + ret=0 00:04:26.034 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.295 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.295 + diff -u /tmp/62.mId /tmp/spdk_tgt_config.json.yRF 00:04:26.295 + echo 'INFO: JSON config files are the same' 00:04:26.295 INFO: JSON config files are the same 00:04:26.295 + rm /tmp/62.mId /tmp/spdk_tgt_config.json.yRF 00:04:26.295 + exit 0 00:04:26.295 10:23:42 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:26.295 10:23:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:26.295 INFO: changing configuration and checking if this can be detected... 
00:04:26.295 10:23:42 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.295 10:23:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.556 10:23:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.556 10:23:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:26.556 10:23:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.556 + '[' 2 -ne 2 ']' 00:04:26.556 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.556 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:26.556 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:26.556 +++ basename /dev/fd/62 00:04:26.556 ++ mktemp /tmp/62.XXX 00:04:26.556 + tmp_file_1=/tmp/62.jUX 00:04:26.556 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.556 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.556 + tmp_file_2=/tmp/spdk_tgt_config.json.M6X 00:04:26.556 + ret=0 00:04:26.556 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.816 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.816 + diff -u /tmp/62.jUX /tmp/spdk_tgt_config.json.M6X 00:04:26.816 + ret=1 00:04:26.816 + echo '=== Start of file: /tmp/62.jUX ===' 00:04:26.816 + cat /tmp/62.jUX 00:04:26.816 + echo '=== End of file: /tmp/62.jUX ===' 00:04:26.816 + echo '' 00:04:26.816 + echo '=== Start of file: /tmp/spdk_tgt_config.json.M6X ===' 00:04:26.816 + cat /tmp/spdk_tgt_config.json.M6X 00:04:26.816 + echo '=== End of file: /tmp/spdk_tgt_config.json.M6X ===' 00:04:26.816 + echo '' 00:04:26.816 + rm /tmp/62.jUX /tmp/spdk_tgt_config.json.M6X 00:04:26.816 + exit 1 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:26.816 INFO: configuration change detected. 
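The json_diff.sh check just completed boils down to: dump the live configuration with save_config, normalize both it and the on-disk JSON through config_filter.py -method sort, and diff the results. An empty diff means the relaunched target reproduced its saved state ("JSON config files are the same"); after deleting MallocBdevForConfigChangeCheck the same diff is expected to be non-empty ("configuration change detected."). A condensed sketch of that comparison, assuming config_filter.py filters stdin to stdout as the harness appears to use it, with paths relative to an SPDK checkout:

    #!/usr/bin/env bash
    # Condensed version of the json_diff.sh comparison seen above.
    set -euo pipefail

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    SORT="test/json_config/config_filter.py -method sort"

    live=$(mktemp /tmp/live.XXX)
    disk=$(mktemp /tmp/disk.XXX)

    # Normalize both sides so ordering differences don't show up as changes.
    $RPC save_config | $SORT > "$live"
    $SORT < spdk_tgt_config.json > "$disk"

    if diff -u "$disk" "$live"; then
        echo "INFO: JSON config files are the same"
    else
        echo "INFO: configuration change detected."
    fi
    rm -f "$live" "$disk"
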
00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 2471330 ]] 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 10:23:42 json_config -- json_config/json_config.sh@323 -- # killprocess 2471330 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@947 -- # '[' -z 2471330 ']' 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@951 -- # kill -0 2471330 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@952 -- # uname 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2471330 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2471330' 00:04:26.816 killing process with pid 2471330 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@966 -- # kill 2471330 00:04:26.816 [2024-05-15 10:23:42.611626] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:26.816 10:23:42 json_config -- common/autotest_common.sh@971 -- # wait 2471330 00:04:28.199 10:23:43 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.199 10:23:43 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:28.199 10:23:43 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:28.199 10:23:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.199 10:23:44 json_config 
-- json_config/json_config.sh@328 -- # return 0 00:04:28.199 10:23:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:28.199 INFO: Success 00:04:28.199 00:04:28.199 real 0m10.892s 00:04:28.199 user 0m11.505s 00:04:28.199 sys 0m2.179s 00:04:28.199 10:23:44 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:28.199 10:23:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.199 ************************************ 00:04:28.199 END TEST json_config 00:04:28.199 ************************************ 00:04:28.199 10:23:44 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.199 10:23:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:28.199 10:23:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:28.199 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.459 ************************************ 00:04:28.459 START TEST json_config_extra_key 00:04:28.459 ************************************ 00:04:28.459 10:23:44 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:28.459 10:23:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.459 10:23:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.459 10:23:44 json_config_extra_key -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.459 10:23:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.459 10:23:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.459 10:23:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.459 10:23:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.459 10:23:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:28.459 10:23:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.459 10:23:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:28.459 INFO: launching applications... 00:04:28.459 10:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2472326 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.459 Waiting for target to run... 00:04:28.459 10:23:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2472326 /var/tmp/spdk_tgt.sock 00:04:28.459 10:23:44 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 2472326 ']' 00:04:28.460 10:23:44 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.460 10:23:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:28.460 10:23:44 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.460 10:23:44 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:28.460 10:23:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.460 10:23:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.460 [2024-05-15 10:23:44.274645] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
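The extra-key target is launched straight from a prebuilt JSON file (--json extra_key.json) rather than with --wait-for-rpc; once it is confirmed up, the test's remaining job is a clean shutdown, which uses the same SIGINT-then-poll loop seen for the json_config target: kill -SIGINT, then up to 30 kill -0 probes with 0.5 s sleeps before "SPDK target shutdown done". As a standalone helper — the hard-kill fallback at the end is an assumption, since the harness's behaviour past 30 tries isn't visible in this excerpt:

    # Sketch of the SIGINT-then-poll shutdown used for each target in this log.
    shutdown_app() {
        local pid=$1 i

        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone

        for ((i = 0; i < 30; i++)); do
            # kill -0 only checks existence; failure means the process exited.
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "SPDK target shutdown done"
                return 0
            fi
            sleep 0.5
        done

        # Assumption: force the issue if the target never drained.
        kill -9 "$pid" 2>/dev/null || true
        return 1
    }
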
00:04:28.460 [2024-05-15 10:23:44.274782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472326 ] 00:04:28.719 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.979 [2024-05-15 10:23:44.770098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.281 [2024-05-15 10:23:44.860481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.281 10:23:45 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:29.281 10:23:45 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:29.281 00:04:29.281 10:23:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:29.281 INFO: shutting down applications... 00:04:29.281 10:23:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2472326 ]] 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2472326 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2472326 00:04:29.281 10:23:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.851 10:23:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.851 10:23:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.851 10:23:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2472326 00:04:29.851 10:23:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.421 10:23:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2472326 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.422 10:23:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.422 SPDK target shutdown done 00:04:30.422 10:23:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:30.422 Success 00:04:30.422 00:04:30.422 real 0m2.049s 00:04:30.422 user 0m1.658s 00:04:30.422 sys 0m0.679s 00:04:30.422 10:23:46 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:30.422 10:23:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.422 ************************************ 00:04:30.422 END TEST json_config_extra_key 00:04:30.422 ************************************ 00:04:30.422 10:23:46 -- spdk/autotest.sh@170 -- # run_test alias_rpc 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.422 10:23:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:30.422 10:23:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:30.422 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.422 ************************************ 00:04:30.422 START TEST alias_rpc 00:04:30.422 ************************************ 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.422 * Looking for test storage... 00:04:30.422 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:04:30.422 10:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.422 10:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2472685 00:04:30.422 10:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2472685 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 2472685 ']' 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:30.422 10:23:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.422 10:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.683 [2024-05-15 10:23:46.385941] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
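alias_rpc drives this freshly launched target with rpc.py load_config -i and then tears it down through killprocess — the other teardown path in this log (a plain kill plus wait, rather than the SIGINT/poll loop above), already used for pids 2469291 and 2471330 and used next for 2472685. A sketch of that guard, reconstructed from the checks visible in the trace (pid-alive check, ps comm lookup, refusal to kill sudo, then kill and reap):

    # Sketch of the killprocess-style teardown each suite ends with.
    killprocess_sketch() {
        local pid=$1 name=""

        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1      # nothing to do if it's gone

        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$name" = sudo ]; then
            # Refuse to kill sudo; the target itself shows up as reactor_0 here.
            echo "refusing to kill $pid ($name)" >&2
            return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
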
00:04:30.683 [2024-05-15 10:23:46.386100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472685 ] 00:04:30.683 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.683 [2024-05-15 10:23:46.515107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.943 [2024-05-15 10:23:46.608446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:31.513 10:23:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:31.513 10:23:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2472685 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 2472685 ']' 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 2472685 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:31.513 10:23:47 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2472685 00:04:31.773 10:23:47 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:31.773 10:23:47 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:31.773 10:23:47 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2472685' 00:04:31.773 killing process with pid 2472685 00:04:31.773 10:23:47 alias_rpc -- common/autotest_common.sh@966 -- # kill 2472685 00:04:31.773 10:23:47 alias_rpc -- common/autotest_common.sh@971 -- # wait 2472685 00:04:32.713 00:04:32.713 real 0m2.009s 00:04:32.713 user 0m2.055s 00:04:32.713 sys 0m0.495s 00:04:32.713 10:23:48 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:32.713 10:23:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.713 ************************************ 00:04:32.713 END TEST alias_rpc 00:04:32.713 ************************************ 00:04:32.713 10:23:48 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:32.713 10:23:48 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.713 10:23:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.713 10:23:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:32.713 10:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.713 ************************************ 00:04:32.713 START TEST spdkcli_tcp 00:04:32.713 ************************************ 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.713 * Looking for test storage... 
00:04:32.713 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2473064 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2473064 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 2473064 ']' 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:32.713 10:23:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:32.713 10:23:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.713 [2024-05-15 10:23:48.456658] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:32.713 [2024-05-15 10:23:48.456783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473064 ] 00:04:32.713 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.713 [2024-05-15 10:23:48.572789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.973 [2024-05-15 10:23:48.667751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.973 [2024-05-15 10:23:48.667754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.544 10:23:49 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:33.544 10:23:49 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:33.544 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2473339 00:04:33.544 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:33.544 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:33.544 [ 00:04:33.544 "bdev_malloc_delete", 00:04:33.544 "bdev_malloc_create", 00:04:33.544 "bdev_null_resize", 00:04:33.544 "bdev_null_delete", 00:04:33.544 "bdev_null_create", 00:04:33.544 "bdev_nvme_cuse_unregister", 00:04:33.544 "bdev_nvme_cuse_register", 00:04:33.544 "bdev_opal_new_user", 00:04:33.544 "bdev_opal_set_lock_state", 00:04:33.544 "bdev_opal_delete", 00:04:33.544 "bdev_opal_get_info", 00:04:33.544 "bdev_opal_create", 00:04:33.544 "bdev_nvme_opal_revert", 00:04:33.544 "bdev_nvme_opal_init", 00:04:33.544 "bdev_nvme_send_cmd", 00:04:33.544 "bdev_nvme_get_path_iostat", 00:04:33.544 "bdev_nvme_get_mdns_discovery_info", 00:04:33.544 "bdev_nvme_stop_mdns_discovery", 00:04:33.544 "bdev_nvme_start_mdns_discovery", 00:04:33.544 "bdev_nvme_set_multipath_policy", 00:04:33.544 "bdev_nvme_set_preferred_path", 00:04:33.544 "bdev_nvme_get_io_paths", 00:04:33.544 "bdev_nvme_remove_error_injection", 00:04:33.544 "bdev_nvme_add_error_injection", 00:04:33.544 "bdev_nvme_get_discovery_info", 00:04:33.544 "bdev_nvme_stop_discovery", 00:04:33.544 "bdev_nvme_start_discovery", 00:04:33.544 "bdev_nvme_get_controller_health_info", 00:04:33.544 "bdev_nvme_disable_controller", 00:04:33.544 "bdev_nvme_enable_controller", 00:04:33.544 "bdev_nvme_reset_controller", 00:04:33.544 "bdev_nvme_get_transport_statistics", 00:04:33.544 "bdev_nvme_apply_firmware", 00:04:33.544 "bdev_nvme_detach_controller", 00:04:33.544 "bdev_nvme_get_controllers", 00:04:33.544 "bdev_nvme_attach_controller", 00:04:33.544 "bdev_nvme_set_hotplug", 00:04:33.544 "bdev_nvme_set_options", 00:04:33.544 "bdev_passthru_delete", 00:04:33.544 "bdev_passthru_create", 00:04:33.544 "bdev_lvol_check_shallow_copy", 00:04:33.544 "bdev_lvol_start_shallow_copy", 00:04:33.544 "bdev_lvol_grow_lvstore", 00:04:33.544 "bdev_lvol_get_lvols", 00:04:33.544 "bdev_lvol_get_lvstores", 00:04:33.544 "bdev_lvol_delete", 00:04:33.544 "bdev_lvol_set_read_only", 00:04:33.544 "bdev_lvol_resize", 00:04:33.544 "bdev_lvol_decouple_parent", 00:04:33.544 "bdev_lvol_inflate", 00:04:33.544 "bdev_lvol_rename", 00:04:33.544 "bdev_lvol_clone_bdev", 00:04:33.544 "bdev_lvol_clone", 00:04:33.544 "bdev_lvol_snapshot", 00:04:33.544 "bdev_lvol_create", 00:04:33.544 "bdev_lvol_delete_lvstore", 00:04:33.544 "bdev_lvol_rename_lvstore", 00:04:33.544 "bdev_lvol_create_lvstore", 00:04:33.544 "bdev_raid_set_options", 
00:04:33.544 "bdev_raid_remove_base_bdev", 00:04:33.544 "bdev_raid_add_base_bdev", 00:04:33.544 "bdev_raid_delete", 00:04:33.544 "bdev_raid_create", 00:04:33.544 "bdev_raid_get_bdevs", 00:04:33.544 "bdev_error_inject_error", 00:04:33.544 "bdev_error_delete", 00:04:33.544 "bdev_error_create", 00:04:33.544 "bdev_split_delete", 00:04:33.544 "bdev_split_create", 00:04:33.544 "bdev_delay_delete", 00:04:33.544 "bdev_delay_create", 00:04:33.544 "bdev_delay_update_latency", 00:04:33.544 "bdev_zone_block_delete", 00:04:33.544 "bdev_zone_block_create", 00:04:33.544 "blobfs_create", 00:04:33.544 "blobfs_detect", 00:04:33.544 "blobfs_set_cache_size", 00:04:33.544 "bdev_aio_delete", 00:04:33.544 "bdev_aio_rescan", 00:04:33.544 "bdev_aio_create", 00:04:33.544 "bdev_ftl_set_property", 00:04:33.544 "bdev_ftl_get_properties", 00:04:33.544 "bdev_ftl_get_stats", 00:04:33.544 "bdev_ftl_unmap", 00:04:33.544 "bdev_ftl_unload", 00:04:33.544 "bdev_ftl_delete", 00:04:33.544 "bdev_ftl_load", 00:04:33.544 "bdev_ftl_create", 00:04:33.544 "bdev_virtio_attach_controller", 00:04:33.544 "bdev_virtio_scsi_get_devices", 00:04:33.544 "bdev_virtio_detach_controller", 00:04:33.544 "bdev_virtio_blk_set_hotplug", 00:04:33.544 "bdev_iscsi_delete", 00:04:33.544 "bdev_iscsi_create", 00:04:33.544 "bdev_iscsi_set_options", 00:04:33.544 "accel_error_inject_error", 00:04:33.544 "ioat_scan_accel_module", 00:04:33.544 "dsa_scan_accel_module", 00:04:33.544 "iaa_scan_accel_module", 00:04:33.544 "keyring_file_remove_key", 00:04:33.544 "keyring_file_add_key", 00:04:33.544 "iscsi_get_histogram", 00:04:33.544 "iscsi_enable_histogram", 00:04:33.544 "iscsi_set_options", 00:04:33.544 "iscsi_get_auth_groups", 00:04:33.544 "iscsi_auth_group_remove_secret", 00:04:33.544 "iscsi_auth_group_add_secret", 00:04:33.544 "iscsi_delete_auth_group", 00:04:33.544 "iscsi_create_auth_group", 00:04:33.544 "iscsi_set_discovery_auth", 00:04:33.544 "iscsi_get_options", 00:04:33.544 "iscsi_target_node_request_logout", 00:04:33.544 "iscsi_target_node_set_redirect", 00:04:33.544 "iscsi_target_node_set_auth", 00:04:33.544 "iscsi_target_node_add_lun", 00:04:33.544 "iscsi_get_stats", 00:04:33.544 "iscsi_get_connections", 00:04:33.544 "iscsi_portal_group_set_auth", 00:04:33.544 "iscsi_start_portal_group", 00:04:33.544 "iscsi_delete_portal_group", 00:04:33.544 "iscsi_create_portal_group", 00:04:33.544 "iscsi_get_portal_groups", 00:04:33.544 "iscsi_delete_target_node", 00:04:33.544 "iscsi_target_node_remove_pg_ig_maps", 00:04:33.544 "iscsi_target_node_add_pg_ig_maps", 00:04:33.544 "iscsi_create_target_node", 00:04:33.544 "iscsi_get_target_nodes", 00:04:33.544 "iscsi_delete_initiator_group", 00:04:33.544 "iscsi_initiator_group_remove_initiators", 00:04:33.544 "iscsi_initiator_group_add_initiators", 00:04:33.544 "iscsi_create_initiator_group", 00:04:33.544 "iscsi_get_initiator_groups", 00:04:33.544 "nvmf_set_crdt", 00:04:33.544 "nvmf_set_config", 00:04:33.544 "nvmf_set_max_subsystems", 00:04:33.544 "nvmf_stop_mdns_prr", 00:04:33.544 "nvmf_publish_mdns_prr", 00:04:33.544 "nvmf_subsystem_get_listeners", 00:04:33.544 "nvmf_subsystem_get_qpairs", 00:04:33.544 "nvmf_subsystem_get_controllers", 00:04:33.544 "nvmf_get_stats", 00:04:33.544 "nvmf_get_transports", 00:04:33.544 "nvmf_create_transport", 00:04:33.544 "nvmf_get_targets", 00:04:33.544 "nvmf_delete_target", 00:04:33.544 "nvmf_create_target", 00:04:33.544 "nvmf_subsystem_allow_any_host", 00:04:33.544 "nvmf_subsystem_remove_host", 00:04:33.544 "nvmf_subsystem_add_host", 00:04:33.544 "nvmf_ns_remove_host", 00:04:33.544 
"nvmf_ns_add_host", 00:04:33.544 "nvmf_subsystem_remove_ns", 00:04:33.544 "nvmf_subsystem_add_ns", 00:04:33.545 "nvmf_subsystem_listener_set_ana_state", 00:04:33.545 "nvmf_discovery_get_referrals", 00:04:33.545 "nvmf_discovery_remove_referral", 00:04:33.545 "nvmf_discovery_add_referral", 00:04:33.545 "nvmf_subsystem_remove_listener", 00:04:33.545 "nvmf_subsystem_add_listener", 00:04:33.545 "nvmf_delete_subsystem", 00:04:33.545 "nvmf_create_subsystem", 00:04:33.545 "nvmf_get_subsystems", 00:04:33.545 "env_dpdk_get_mem_stats", 00:04:33.545 "nbd_get_disks", 00:04:33.545 "nbd_stop_disk", 00:04:33.545 "nbd_start_disk", 00:04:33.545 "ublk_recover_disk", 00:04:33.545 "ublk_get_disks", 00:04:33.545 "ublk_stop_disk", 00:04:33.545 "ublk_start_disk", 00:04:33.545 "ublk_destroy_target", 00:04:33.545 "ublk_create_target", 00:04:33.545 "virtio_blk_create_transport", 00:04:33.545 "virtio_blk_get_transports", 00:04:33.545 "vhost_controller_set_coalescing", 00:04:33.545 "vhost_get_controllers", 00:04:33.545 "vhost_delete_controller", 00:04:33.545 "vhost_create_blk_controller", 00:04:33.545 "vhost_scsi_controller_remove_target", 00:04:33.545 "vhost_scsi_controller_add_target", 00:04:33.545 "vhost_start_scsi_controller", 00:04:33.545 "vhost_create_scsi_controller", 00:04:33.545 "thread_set_cpumask", 00:04:33.545 "framework_get_scheduler", 00:04:33.545 "framework_set_scheduler", 00:04:33.545 "framework_get_reactors", 00:04:33.545 "thread_get_io_channels", 00:04:33.545 "thread_get_pollers", 00:04:33.545 "thread_get_stats", 00:04:33.545 "framework_monitor_context_switch", 00:04:33.545 "spdk_kill_instance", 00:04:33.545 "log_enable_timestamps", 00:04:33.545 "log_get_flags", 00:04:33.545 "log_clear_flag", 00:04:33.545 "log_set_flag", 00:04:33.545 "log_get_level", 00:04:33.545 "log_set_level", 00:04:33.545 "log_get_print_level", 00:04:33.545 "log_set_print_level", 00:04:33.545 "framework_enable_cpumask_locks", 00:04:33.545 "framework_disable_cpumask_locks", 00:04:33.545 "framework_wait_init", 00:04:33.545 "framework_start_init", 00:04:33.545 "scsi_get_devices", 00:04:33.545 "bdev_get_histogram", 00:04:33.545 "bdev_enable_histogram", 00:04:33.545 "bdev_set_qos_limit", 00:04:33.545 "bdev_set_qd_sampling_period", 00:04:33.545 "bdev_get_bdevs", 00:04:33.545 "bdev_reset_iostat", 00:04:33.545 "bdev_get_iostat", 00:04:33.545 "bdev_examine", 00:04:33.545 "bdev_wait_for_examine", 00:04:33.545 "bdev_set_options", 00:04:33.545 "notify_get_notifications", 00:04:33.545 "notify_get_types", 00:04:33.545 "accel_get_stats", 00:04:33.545 "accel_set_options", 00:04:33.545 "accel_set_driver", 00:04:33.545 "accel_crypto_key_destroy", 00:04:33.545 "accel_crypto_keys_get", 00:04:33.545 "accel_crypto_key_create", 00:04:33.545 "accel_assign_opc", 00:04:33.545 "accel_get_module_info", 00:04:33.545 "accel_get_opc_assignments", 00:04:33.545 "vmd_rescan", 00:04:33.545 "vmd_remove_device", 00:04:33.545 "vmd_enable", 00:04:33.545 "sock_get_default_impl", 00:04:33.545 "sock_set_default_impl", 00:04:33.545 "sock_impl_set_options", 00:04:33.545 "sock_impl_get_options", 00:04:33.545 "iobuf_get_stats", 00:04:33.545 "iobuf_set_options", 00:04:33.545 "framework_get_pci_devices", 00:04:33.545 "framework_get_config", 00:04:33.545 "framework_get_subsystems", 00:04:33.545 "trace_get_info", 00:04:33.545 "trace_get_tpoint_group_mask", 00:04:33.545 "trace_disable_tpoint_group", 00:04:33.545 "trace_enable_tpoint_group", 00:04:33.545 "trace_clear_tpoint_mask", 00:04:33.545 "trace_set_tpoint_mask", 00:04:33.545 "keyring_get_keys", 00:04:33.545 
"spdk_get_version", 00:04:33.545 "rpc_get_methods" 00:04:33.545 ] 00:04:33.545 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.545 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:33.545 10:23:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2473064 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 2473064 ']' 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 2473064 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2473064 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2473064' 00:04:33.545 killing process with pid 2473064 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 2473064 00:04:33.545 10:23:49 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 2473064 00:04:34.487 00:04:34.487 real 0m1.892s 00:04:34.487 user 0m3.200s 00:04:34.487 sys 0m0.474s 00:04:34.487 10:23:50 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:34.487 10:23:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 ************************************ 00:04:34.487 END TEST spdkcli_tcp 00:04:34.487 ************************************ 00:04:34.487 10:23:50 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.487 10:23:50 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:34.487 10:23:50 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:34.487 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 ************************************ 00:04:34.487 START TEST dpdk_mem_utility 00:04:34.487 ************************************ 00:04:34.487 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.487 * Looking for test storage... 
00:04:34.487 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:04:34.487 10:23:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:34.488 10:23:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2473694 00:04:34.488 10:23:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2473694 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 2473694 ']' 00:04:34.488 10:23:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:34.488 10:23:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.748 [2024-05-15 10:23:50.437283] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:34.748 [2024-05-15 10:23:50.437407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473694 ] 00:04:34.748 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.748 [2024-05-15 10:23:50.555659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.008 [2024-05-15 10:23:50.648891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.578 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:35.578 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:04:35.578 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:35.578 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:35.578 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:35.578 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.578 { 00:04:35.578 "filename": "/tmp/spdk_mem_dump.txt" 00:04:35.578 } 00:04:35.578 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:35.578 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:35.578 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:35.578 1 heaps totaling size 820.000000 MiB 00:04:35.578 size: 820.000000 MiB heap id: 0 00:04:35.578 end heaps---------- 00:04:35.578 8 mempools totaling size 598.116089 MiB 00:04:35.578 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:35.578 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:35.578 size: 84.521057 MiB name: bdev_io_2473694 00:04:35.578 size: 51.011292 MiB name: evtpool_2473694 00:04:35.578 size: 50.003479 MiB name: msgpool_2473694 
00:04:35.578 size: 21.763794 MiB name: PDU_Pool 00:04:35.578 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:35.578 size: 0.026123 MiB name: Session_Pool 00:04:35.578 end mempools------- 00:04:35.578 6 memzones totaling size 4.142822 MiB 00:04:35.578 size: 1.000366 MiB name: RG_ring_0_2473694 00:04:35.578 size: 1.000366 MiB name: RG_ring_1_2473694 00:04:35.578 size: 1.000366 MiB name: RG_ring_4_2473694 00:04:35.578 size: 1.000366 MiB name: RG_ring_5_2473694 00:04:35.578 size: 0.125366 MiB name: RG_ring_2_2473694 00:04:35.578 size: 0.015991 MiB name: RG_ring_3_2473694 00:04:35.578 end memzones------- 00:04:35.578 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:35.578 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:04:35.578 list of free elements. size: 18.514832 MiB 00:04:35.578 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:35.578 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:35.578 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:35.578 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:35.578 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:35.578 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:35.578 element at address: 0x200019600000 with size: 0.999329 MiB 00:04:35.578 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:35.578 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:35.578 element at address: 0x200018e00000 with size: 0.959900 MiB 00:04:35.578 element at address: 0x200019900040 with size: 0.937256 MiB 00:04:35.578 element at address: 0x200000200000 with size: 0.840942 MiB 00:04:35.578 element at address: 0x20001b000000 with size: 0.583191 MiB 00:04:35.578 element at address: 0x200019200000 with size: 0.491150 MiB 00:04:35.578 element at address: 0x200019a00000 with size: 0.485657 MiB 00:04:35.578 element at address: 0x200013800000 with size: 0.470581 MiB 00:04:35.578 element at address: 0x200028400000 with size: 0.411072 MiB 00:04:35.578 element at address: 0x200003a00000 with size: 0.356140 MiB 00:04:35.578 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:04:35.578 list of standard malloc elements. 
size: 199.220764 MiB 00:04:35.578 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:35.578 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:35.578 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:35.578 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:35.578 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:35.578 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:35.578 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:35.578 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:35.578 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:04:35.578 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:04:35.578 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:35.578 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:35.578 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:35.578 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:04:35.578 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:04:35.579 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:35.579 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:35.579 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:35.579 list of memzone associated elements. 
size: 602.264404 MiB 00:04:35.579 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:35.579 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:35.579 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:35.579 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:35.579 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:35.579 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2473694_0 00:04:35.579 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:35.579 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2473694_0 00:04:35.579 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:35.579 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2473694_0 00:04:35.579 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:35.579 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:35.579 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:35.579 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:35.579 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:35.579 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2473694 00:04:35.579 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:35.579 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2473694 00:04:35.579 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:35.579 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2473694 00:04:35.579 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:35.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:35.579 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:35.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:35.579 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:35.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:35.579 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:35.579 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:35.579 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:35.579 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2473694 00:04:35.579 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:35.579 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2473694 00:04:35.579 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:35.579 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2473694 00:04:35.579 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:35.579 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2473694 00:04:35.579 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:35.579 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2473694 00:04:35.579 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:04:35.579 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:35.579 element at address: 0x200013878780 with size: 0.500549 MiB 00:04:35.579 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:35.579 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:04:35.579 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:35.579 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:35.579 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2473694 00:04:35.579 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:04:35.579 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:35.579 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:04:35.579 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:35.579 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:35.579 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2473694 00:04:35.579 element at address: 0x20002846f540 with size: 0.002502 MiB 00:04:35.579 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:35.579 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:04:35.579 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2473694 00:04:35.579 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:35.579 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2473694 00:04:35.579 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:04:35.579 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:35.579 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:35.579 10:23:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2473694 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 2473694 ']' 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 2473694 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2473694 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2473694' 00:04:35.579 killing process with pid 2473694 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 2473694 00:04:35.579 10:23:51 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 2473694 00:04:36.520 00:04:36.520 real 0m1.975s 00:04:36.520 user 0m1.974s 00:04:36.520 sys 0m0.457s 00:04:36.520 10:23:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:36.520 10:23:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.520 ************************************ 00:04:36.520 END TEST dpdk_mem_utility 00:04:36.520 ************************************ 00:04:36.520 10:23:52 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:36.520 10:23:52 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:36.520 10:23:52 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:36.520 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.520 ************************************ 00:04:36.520 START TEST event 00:04:36.520 ************************************ 00:04:36.520 10:23:52 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:36.520 * Looking for test storage... 
00:04:36.520 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:36.520 10:23:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:36.520 10:23:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:36.520 10:23:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.520 10:23:52 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:04:36.520 10:23:52 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:36.520 10:23:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.780 ************************************ 00:04:36.780 START TEST event_perf 00:04:36.780 ************************************ 00:04:36.780 10:23:52 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.780 Running I/O for 1 seconds...[2024-05-15 10:23:52.450779] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:36.780 [2024-05-15 10:23:52.450886] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474054 ] 00:04:36.780 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.780 [2024-05-15 10:23:52.565387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.038 [2024-05-15 10:23:52.660480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.038 [2024-05-15 10:23:52.660502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.038 [2024-05-15 10:23:52.660609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.038 [2024-05-15 10:23:52.660617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.972 Running I/O for 1 seconds... 00:04:37.972 lcore 0: 166704 00:04:37.972 lcore 1: 166705 00:04:37.972 lcore 2: 166705 00:04:37.972 lcore 3: 166703 00:04:37.972 done. 00:04:37.972 00:04:37.972 real 0m1.388s 00:04:37.972 user 0m4.245s 00:04:37.972 sys 0m0.125s 00:04:37.972 10:23:53 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:37.972 10:23:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.972 ************************************ 00:04:37.972 END TEST event_perf 00:04:37.972 ************************************ 00:04:37.972 10:23:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:37.972 10:23:53 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:37.972 10:23:53 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:37.972 10:23:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.231 ************************************ 00:04:38.231 START TEST event_reactor 00:04:38.231 ************************************ 00:04:38.231 10:23:53 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:38.231 [2024-05-15 10:23:53.902298] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:38.231 [2024-05-15 10:23:53.902404] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474380 ] 00:04:38.231 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.231 [2024-05-15 10:23:54.017677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.492 [2024-05-15 10:23:54.110453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.428 test_start 00:04:39.428 oneshot 00:04:39.428 tick 100 00:04:39.428 tick 100 00:04:39.428 tick 250 00:04:39.428 tick 100 00:04:39.428 tick 100 00:04:39.428 tick 100 00:04:39.428 tick 250 00:04:39.428 tick 500 00:04:39.428 tick 100 00:04:39.428 tick 100 00:04:39.428 tick 250 00:04:39.428 tick 100 00:04:39.428 tick 100 00:04:39.428 test_end 00:04:39.428 00:04:39.428 real 0m1.386s 00:04:39.428 user 0m1.262s 00:04:39.428 sys 0m0.116s 00:04:39.428 10:23:55 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:39.428 10:23:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 ************************************ 00:04:39.428 END TEST event_reactor 00:04:39.428 ************************************ 00:04:39.428 10:23:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.428 10:23:55 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:39.428 10:23:55 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:39.428 10:23:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.689 ************************************ 00:04:39.689 START TEST event_reactor_perf 00:04:39.689 ************************************ 00:04:39.689 10:23:55 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.689 [2024-05-15 10:23:55.345101] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:04:39.689 [2024-05-15 10:23:55.345208] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474695 ] 00:04:39.689 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.689 [2024-05-15 10:23:55.460932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.689 [2024-05-15 10:23:55.551628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.073 test_start 00:04:41.073 test_end 00:04:41.073 Performance: 425603 events per second 00:04:41.073 00:04:41.073 real 0m1.386s 00:04:41.073 user 0m1.256s 00:04:41.073 sys 0m0.122s 00:04:41.073 10:23:56 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:41.073 10:23:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.073 ************************************ 00:04:41.073 END TEST event_reactor_perf 00:04:41.073 ************************************ 00:04:41.073 10:23:56 event -- event/event.sh@49 -- # uname -s 00:04:41.073 10:23:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:41.073 10:23:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.073 10:23:56 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:41.073 10:23:56 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:41.073 10:23:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.073 ************************************ 00:04:41.073 START TEST event_scheduler 00:04:41.073 ************************************ 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.073 * Looking for test storage... 00:04:41.073 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:04:41.073 10:23:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:41.073 10:23:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2475040 00:04:41.073 10:23:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.073 10:23:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2475040 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 2475040 ']' 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:41.073 10:23:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:41.073 10:23:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.073 [2024-05-15 10:23:56.923166] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:41.073 [2024-05-15 10:23:56.923316] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475040 ] 00:04:41.334 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.334 [2024-05-15 10:23:57.056485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.334 [2024-05-15 10:23:57.156986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.334 [2024-05-15 10:23:57.157105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.334 [2024-05-15 10:23:57.157123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.334 [2024-05-15 10:23:57.157133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:04:41.902 10:23:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.902 POWER: Env isn't set yet! 00:04:41.902 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:41.902 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:41.902 POWER: Cannot set governor of lcore 0 to userspace 00:04:41.902 POWER: Attempting to initialise PSTAT power management... 00:04:41.902 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:41.902 POWER: Initialized successfully for lcore 0 power management 00:04:41.902 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:41.902 POWER: Initialized successfully for lcore 1 power management 00:04:41.902 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:41.902 POWER: Initialized successfully for lcore 2 power management 00:04:41.902 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:41.902 POWER: Initialized successfully for lcore 3 power management 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:41.902 10:23:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:41.902 10:23:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 [2024-05-15 10:23:57.888447] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:42.161 10:23:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.161 10:23:57 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:42.161 10:23:57 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 ************************************ 00:04:42.161 START TEST scheduler_create_thread 00:04:42.161 ************************************ 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 2 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 3 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 4 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 5 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 6 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 7 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 8 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 9 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 10 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.161 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.105 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.105 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:43.105 10:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:43.105 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.105 10:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.043 10:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:44.043 00:04:44.043 real 0m1.753s 00:04:44.043 user 0m0.016s 00:04:44.043 sys 0m0.006s 00:04:44.043 10:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:44.043 10:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.043 ************************************ 00:04:44.043 END TEST scheduler_create_thread 00:04:44.043 ************************************ 00:04:44.043 10:23:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:44.043 10:23:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2475040 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 2475040 ']' 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 2475040 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2475040 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2475040' 00:04:44.043 killing process with pid 2475040 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 2475040 00:04:44.043 10:23:59 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 2475040 00:04:44.305 [2024-05-15 10:24:00.155998] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:44.877 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:44.877 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:44.877 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:44.877 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:44.877 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:44.877 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:44.877 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:44.877 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:44.877 00:04:44.877 real 0m3.847s 00:04:44.877 user 0m6.231s 00:04:44.877 sys 0m0.466s 00:04:44.877 10:24:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:44.877 10:24:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.877 ************************************ 00:04:44.877 END TEST event_scheduler 00:04:44.877 ************************************ 00:04:44.877 10:24:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:44.877 10:24:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:44.877 10:24:00 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:44.877 10:24:00 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:44.877 10:24:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.877 ************************************ 00:04:44.877 START TEST app_repeat 00:04:44.877 ************************************ 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2475750 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2475750' 00:04:44.878 Process app_repeat pid: 2475750 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:44.878 spdk_app_start Round 0 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2475750 /var/tmp/spdk-nbd.sock 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2475750 ']' 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:44.878 10:24:00 
event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:44.878 10:24:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:44.878 10:24:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.878 [2024-05-15 10:24:00.745528] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:04:44.878 [2024-05-15 10:24:00.745670] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475750 ] 00:04:45.139 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.139 [2024-05-15 10:24:00.883549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.139 [2024-05-15 10:24:00.979521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.139 [2024-05-15 10:24:00.979521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.710 10:24:01 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:45.710 10:24:01 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:45.710 10:24:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.970 Malloc0 00:04:45.970 10:24:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.230 Malloc1 00:04:46.230 10:24:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.230 10:24:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.231 10:24:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.231 /dev/nbd0 00:04:46.231 10:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.231 10:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.231 1+0 records in 00:04:46.231 1+0 records out 00:04:46.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193185 s, 21.2 MB/s 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:46.231 10:24:02 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:46.231 10:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.231 10:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.231 10:24:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.495 /dev/nbd1 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.495 1+0 records in 00:04:46.495 1+0 records out 00:04:46.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176646 s, 23.2 MB/s 00:04:46.495 
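Everything the verify step needs has now been set up over the spdk-nbd.sock RPC channel: two 64 MB malloc bdevs with a 4096-byte block size, each exported as a kernel nbd device. Condensed to the bare RPC calls (rpc.py path shortened; the real run uses the full workspace path seen in the trace):

# Create two RAM-backed bdevs and expose them as /dev/nbd0 and /dev/nbd1.
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdev0=$($rpc bdev_malloc_create 64 4096)   # prints the new bdev name, here Malloc0
bdev1=$($rpc bdev_malloc_create 64 4096)   # here Malloc1
$rpc nbd_start_disk "$bdev0" /dev/nbd0
$rpc nbd_start_disk "$bdev1" /dev/nbd1
# each attach is then gated by the waitfornbd poll traced above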
10:24:02 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:46.495 10:24:02 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.495 10:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.824 { 00:04:46.824 "nbd_device": "/dev/nbd0", 00:04:46.824 "bdev_name": "Malloc0" 00:04:46.824 }, 00:04:46.824 { 00:04:46.824 "nbd_device": "/dev/nbd1", 00:04:46.824 "bdev_name": "Malloc1" 00:04:46.824 } 00:04:46.824 ]' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.824 { 00:04:46.824 "nbd_device": "/dev/nbd0", 00:04:46.824 "bdev_name": "Malloc0" 00:04:46.824 }, 00:04:46.824 { 00:04:46.824 "nbd_device": "/dev/nbd1", 00:04:46.824 "bdev_name": "Malloc1" 00:04:46.824 } 00:04:46.824 ]' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.824 /dev/nbd1' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.824 /dev/nbd1' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.824 256+0 records in 00:04:46.824 256+0 records out 00:04:46.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491227 s, 213 MB/s 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.824 10:24:02 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.824 256+0 records in 00:04:46.824 256+0 records out 00:04:46.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155737 s, 67.3 MB/s 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.824 256+0 records in 00:04:46.824 256+0 records out 00:04:46.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166548 s, 63.0 MB/s 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.824 10:24:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
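The actual data check is ordinary dd and cmp, exactly as traced: fill a scratch file with 1 MiB of random data, push it through each nbd device with O_DIRECT, then byte-compare the devices against the file. A trimmed reproduction (scratch path shortened from the workspace path in the log):

# Write 1 MiB of random data through each nbd device, then verify it byte-for-byte.
tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 256 x 4 KiB = 1 MiB
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                          # non-zero exit on any mismatch
done
rm "$tmp_file"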
00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.085 10:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.347 10:24:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.347 10:24:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.608 10:24:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.179 [2024-05-15 10:24:03.746121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.179 [2024-05-15 10:24:03.834300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.179 [2024-05-15 10:24:03.834306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.179 [2024-05-15 10:24:03.908157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.179 [2024-05-15 10:24:03.908205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
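Each round then tears itself down the same way: stop both nbd disks, confirm nbd_get_disks reports an empty list, ask the app to exit with spdk_kill_instance, and pause before the next iteration. Roughly (rpc.py path shortened; the failure handling here is simplified):

# End-of-round teardown, paraphrased from the trace above.
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || exit 1               # no nbd device may survive the round
$rpc spdk_kill_instance SIGTERM            # app_repeat moves on to its next iteration
sleep 3                                    # matches the 'sleep 3' between rounds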
00:04:50.720 10:24:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.720 10:24:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:50.720 spdk_app_start Round 1 00:04:50.720 10:24:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2475750 /var/tmp/spdk-nbd.sock 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2475750 ']' 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:50.720 10:24:06 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:50.720 10:24:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.720 Malloc0 00:04:50.981 10:24:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.981 Malloc1 00:04:50.981 10:24:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.981 10:24:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.242 /dev/nbd0 00:04:51.242 10:24:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.242 10:24:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
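The waitfornbd call the trace is about to expand does two things: poll /proc/partitions until the named device appears, then read a single 4 KiB block with O_DIRECT and check that a non-empty file came back. Condensed into one sketch (the retry pacing is an assumption, since the traced runs succeed on the first pass, and the scratch path is shortened):

# Readiness probe for a freshly attached nbd device (condensed from the trace).
waitfornbd() {
    local nbd_name=$1 tmp=/tmp/nbdtest i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                       # assumed back-off between polls
    done
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0                # real data came back, device is usable
        fi
        sleep 0.1
    done
    return 1
}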
00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.242 1+0 records in 00:04:51.242 1+0 records out 00:04:51.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185085 s, 22.1 MB/s 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:51.242 10:24:06 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:51.242 10:24:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.242 10:24:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.242 10:24:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.503 /dev/nbd1 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.503 1+0 records in 00:04:51.503 1+0 records out 00:04:51.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277018 s, 14.8 MB/s 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:51.503 10:24:07 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.503 { 00:04:51.503 "nbd_device": "/dev/nbd0", 00:04:51.503 "bdev_name": "Malloc0" 00:04:51.503 }, 00:04:51.503 { 00:04:51.503 "nbd_device": "/dev/nbd1", 00:04:51.503 "bdev_name": "Malloc1" 00:04:51.503 } 00:04:51.503 ]' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.503 { 00:04:51.503 "nbd_device": "/dev/nbd0", 00:04:51.503 "bdev_name": "Malloc0" 00:04:51.503 }, 00:04:51.503 { 00:04:51.503 "nbd_device": "/dev/nbd1", 00:04:51.503 "bdev_name": "Malloc1" 00:04:51.503 } 00:04:51.503 ]' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.503 /dev/nbd1' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.503 /dev/nbd1' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.503 10:24:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.764 256+0 records in 00:04:51.764 256+0 records out 00:04:51.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045478 s, 231 MB/s 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.764 256+0 records in 00:04:51.764 256+0 records out 00:04:51.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170972 s, 61.3 MB/s 00:04:51.764 10:24:07 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.764 256+0 records in 00:04:51.764 256+0 records out 00:04:51.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180735 s, 58.0 MB/s 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.764 10:24:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.765 10:24:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.025 10:24:07 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.025 10:24:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.286 10:24:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.286 10:24:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.546 10:24:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.806 [2024-05-15 10:24:08.650538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.066 [2024-05-15 10:24:08.741378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.066 [2024-05-15 10:24:08.741378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.066 [2024-05-15 10:24:08.815921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.066 [2024-05-15 10:24:08.815967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
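Round 1 has just replayed Round 0 step for step, and Round 2 below does the same, because the whole app_repeat test is one loop around that fixture. Its shape, paraphrased from the event.sh trace lines rather than quoted from the script (the backgrounded launch and the placeholder comment are assumptions):

# Outline of the app_repeat flow as it appears in the trace (paraphrased, not event.sh verbatim).
app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &     # the app restarts itself 4 times (-t 4)
repeat_pid=$!
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock # block until the RPC socket answers
    # create malloc bdevs, attach nbd, write and verify, detach (as traced above)
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3                                            # give the next iteration time to come up
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # Round 3: only confirm it is up again
killprocess "$repeat_pid"                              # then stop the binary for good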
00:04:55.609 10:24:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.609 10:24:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:55.609 spdk_app_start Round 2 00:04:55.609 10:24:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2475750 /var/tmp/spdk-nbd.sock 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2475750 ']' 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:55.609 10:24:11 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:55.609 10:24:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.871 Malloc0 00:04:55.871 10:24:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.871 Malloc1 00:04:55.871 10:24:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.871 10:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.133 /dev/nbd0 00:04:56.133 10:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.133 10:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.133 1+0 records in 00:04:56.133 1+0 records out 00:04:56.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173896 s, 23.6 MB/s 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:56.133 10:24:11 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:56.133 10:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.133 10:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.133 10:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.393 /dev/nbd1 00:04:56.393 10:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.393 10:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.393 1+0 records in 00:04:56.393 1+0 records out 00:04:56.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264167 s, 15.5 MB/s 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:56.393 10:24:12 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:56.394 10:24:12 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.394 { 00:04:56.394 "nbd_device": "/dev/nbd0", 00:04:56.394 "bdev_name": "Malloc0" 00:04:56.394 }, 00:04:56.394 { 00:04:56.394 "nbd_device": "/dev/nbd1", 00:04:56.394 "bdev_name": "Malloc1" 00:04:56.394 } 00:04:56.394 ]' 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.394 { 00:04:56.394 "nbd_device": "/dev/nbd0", 00:04:56.394 "bdev_name": "Malloc0" 00:04:56.394 }, 00:04:56.394 { 00:04:56.394 "nbd_device": "/dev/nbd1", 00:04:56.394 "bdev_name": "Malloc1" 00:04:56.394 } 00:04:56.394 ]' 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.394 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.394 /dev/nbd1' 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.655 /dev/nbd1' 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.655 256+0 records in 00:04:56.655 256+0 records out 00:04:56.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045596 s, 230 MB/s 00:04:56.655 10:24:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.656 256+0 records in 00:04:56.656 256+0 records out 00:04:56.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148319 s, 70.7 MB/s 00:04:56.656 10:24:12 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.656 256+0 records in 00:04:56.656 256+0 records out 00:04:56.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168282 s, 62.3 MB/s 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.656 10:24:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.918 10:24:12 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.918 10:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.179 10:24:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.179 10:24:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.438 10:24:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.698 [2024-05-15 10:24:13.535602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.959 [2024-05-15 10:24:13.626382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.959 [2024-05-15 10:24:13.626386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.959 [2024-05-15 10:24:13.700852] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.959 [2024-05-15 10:24:13.700904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.505 10:24:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2475750 /var/tmp/spdk-nbd.sock 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2475750 ']' 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:00.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:00.505 10:24:16 event.app_repeat -- event/event.sh@39 -- # killprocess 2475750 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 2475750 ']' 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 2475750 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2475750 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2475750' 00:05:00.505 killing process with pid 2475750 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@966 -- # kill 2475750 00:05:00.505 10:24:16 event.app_repeat -- common/autotest_common.sh@971 -- # wait 2475750 00:05:01.076 spdk_app_start is called in Round 0. 00:05:01.076 Shutdown signal received, stop current app iteration 00:05:01.076 Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 reinitialization... 00:05:01.076 spdk_app_start is called in Round 1. 00:05:01.076 Shutdown signal received, stop current app iteration 00:05:01.076 Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 reinitialization... 00:05:01.076 spdk_app_start is called in Round 2. 00:05:01.076 Shutdown signal received, stop current app iteration 00:05:01.076 Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 reinitialization... 00:05:01.076 spdk_app_start is called in Round 3. 
00:05:01.076 Shutdown signal received, stop current app iteration 00:05:01.076 10:24:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:01.076 10:24:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:01.076 00:05:01.076 real 0m15.998s 00:05:01.076 user 0m33.578s 00:05:01.076 sys 0m2.179s 00:05:01.076 10:24:16 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:01.076 10:24:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.076 ************************************ 00:05:01.076 END TEST app_repeat 00:05:01.076 ************************************ 00:05:01.076 10:24:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:01.076 10:24:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.076 10:24:16 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:01.076 10:24:16 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:01.076 10:24:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.076 ************************************ 00:05:01.076 START TEST cpu_locks 00:05:01.076 ************************************ 00:05:01.076 10:24:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.076 * Looking for test storage... 00:05:01.076 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:01.076 10:24:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:01.076 10:24:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:01.076 10:24:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:01.076 10:24:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:01.076 10:24:16 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:01.076 10:24:16 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:01.076 10:24:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.076 ************************************ 00:05:01.076 START TEST default_locks 00:05:01.076 ************************************ 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2479727 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2479727 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2479727 ']' 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
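The cpu_locks suite that starts here runs default_locks first: launch spdk_tgt pinned to a single core (-m 0x1), wait for its default RPC socket, then assert that the core lock file shows up among the process's file locks. The check itself appears in the trace just below; as a sketch (binary and helper paths shortened):

# default_locks setup and lock check, reconstructed from the trace.
spdk_tgt -m 0x1 &                              # core mask 0x1 pins the target to core 0
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"                  # default RPC socket /var/tmp/spdk.sock
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock
# grep -q exits as soon as it matches and closes the pipe, which is why the
# trace below also shows 'lslocks: write error' even though the check passes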
00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:01.076 10:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.076 [2024-05-15 10:24:16.943464] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:01.076 [2024-05-15 10:24:16.943574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479727 ] 00:05:01.338 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.338 [2024-05-15 10:24:17.063175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.338 [2024-05-15 10:24:17.156760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.910 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:01.910 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:01.910 10:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2479727 00:05:01.910 10:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2479727 00:05:01.910 10:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.171 lslocks: write error 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2479727 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 2479727 ']' 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 2479727 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2479727 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2479727' 00:05:02.171 killing process with pid 2479727 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 2479727 00:05:02.171 10:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 2479727 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2479727 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2479727 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2479727 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2479727 ']' 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.113 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2479727) - No such process 00:05:03.113 ERROR: process (pid: 2479727) is no longer running 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.113 00:05:03.113 real 0m1.843s 00:05:03.113 user 0m1.792s 00:05:03.113 sys 0m0.490s 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:03.113 10:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.113 ************************************ 00:05:03.113 END TEST default_locks 00:05:03.113 ************************************ 00:05:03.113 10:24:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:03.113 10:24:18 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:03.113 10:24:18 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:03.113 10:24:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.113 ************************************ 00:05:03.113 START TEST default_locks_via_rpc 00:05:03.113 ************************************ 00:05:03.113 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2480066 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2480066 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2480066 ']' 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:03.114 10:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.114 [2024-05-15 10:24:18.844362] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:03.114 [2024-05-15 10:24:18.844473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480066 ] 00:05:03.114 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.114 [2024-05-15 10:24:18.961759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.376 [2024-05-15 10:24:19.055458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@947 -- # '[' -z 2480066 ']' 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2480066' 00:05:03.947 killing process with pid 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 2480066 00:05:03.947 10:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 2480066 00:05:04.890 00:05:04.890 real 0m1.810s 00:05:04.890 user 0m1.751s 00:05:04.890 sys 0m0.490s 00:05:04.890 10:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.890 10:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.890 ************************************ 00:05:04.890 END TEST default_locks_via_rpc 00:05:04.890 ************************************ 00:05:04.890 10:24:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:04.890 10:24:20 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:04.890 10:24:20 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:04.890 10:24:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.890 ************************************ 00:05:04.890 START TEST non_locking_app_on_locked_coremask 00:05:04.890 ************************************ 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2480400 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2480400 /var/tmp/spdk.sock 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2480400 ']' 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:04.890 10:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.890 [2024-05-15 10:24:20.716780] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:04.890 [2024-05-15 10:24:20.716887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480400 ] 00:05:05.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.150 [2024-05-15 10:24:20.832073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.150 [2024-05-15 10:24:20.925799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2480632 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2480632 /var/tmp/spdk2.sock 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2480632 ']' 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:05.722 10:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.722 [2024-05-15 10:24:21.488634] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:05.722 [2024-05-15 10:24:21.488760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480632 ] 00:05:05.722 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.003 [2024-05-15 10:24:21.641890] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.003 [2024-05-15 10:24:21.641933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.003 [2024-05-15 10:24:21.833073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.987 lslocks: write error 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2480400 ']' 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2480400' 00:05:06.987 killing process with pid 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2480400 00:05:06.987 10:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2480400 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2480632 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2480632 ']' 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2480632 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2480632 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2480632' 00:05:08.896 
killing process with pid 2480632 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2480632 00:05:08.896 10:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2480632 00:05:09.836 00:05:09.836 real 0m4.732s 00:05:09.836 user 0m4.731s 00:05:09.836 sys 0m1.023s 00:05:09.836 10:24:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:09.836 10:24:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.837 ************************************ 00:05:09.837 END TEST non_locking_app_on_locked_coremask 00:05:09.837 ************************************ 00:05:09.837 10:24:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:09.837 10:24:25 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:09.837 10:24:25 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:09.837 10:24:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.837 ************************************ 00:05:09.837 START TEST locking_app_on_unlocked_coremask 00:05:09.837 ************************************ 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2481335 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2481335 /var/tmp/spdk.sock 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2481335 ']' 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:09.837 10:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.837 [2024-05-15 10:24:25.515572] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:09.837 [2024-05-15 10:24:25.515688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481335 ] 00:05:09.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.837 [2024-05-15 10:24:25.618867] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:09.837 [2024-05-15 10:24:25.618900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.096 [2024-05-15 10:24:25.711070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.354 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2481600 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2481600 /var/tmp/spdk2.sock 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2481600 ']' 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 10:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.614 [2024-05-15 10:24:26.289100] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:10.614 [2024-05-15 10:24:26.289222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481600 ] 00:05:10.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.614 [2024-05-15 10:24:26.443337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.874 [2024-05-15 10:24:26.627875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.443 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:11.443 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:11.443 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2481600 00:05:11.443 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2481600 00:05:11.443 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.012 lslocks: write error 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2481335 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2481335 ']' 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2481335 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2481335 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2481335' 00:05:12.012 killing process with pid 2481335 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2481335 00:05:12.012 10:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2481335 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2481600 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2481600 ']' 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2481600 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2481600 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2481600' 00:05:13.924 killing process with pid 2481600 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2481600 00:05:13.924 10:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2481600 00:05:14.496 00:05:14.496 real 0m4.764s 00:05:14.496 user 0m4.869s 00:05:14.496 sys 0m0.937s 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.496 ************************************ 00:05:14.496 END TEST locking_app_on_unlocked_coremask 00:05:14.496 ************************************ 00:05:14.496 10:24:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:14.496 10:24:30 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:14.496 10:24:30 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:14.496 10:24:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.496 ************************************ 00:05:14.496 START TEST locking_app_on_locked_coremask 00:05:14.496 ************************************ 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2482261 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2482261 /var/tmp/spdk.sock 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2482261 ']' 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.496 10:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.496 [2024-05-15 10:24:30.362815] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:14.496 [2024-05-15 10:24:30.362950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482261 ] 00:05:14.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.756 [2024-05-15 10:24:30.494997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.756 [2024-05-15 10:24:30.588719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2482550 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2482550 /var/tmp/spdk2.sock 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2482550 /var/tmp/spdk2.sock 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2482550 /var/tmp/spdk2.sock 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2482550 ']' 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:15.327 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.327 [2024-05-15 10:24:31.175236] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:15.327 [2024-05-15 10:24:31.175378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482550 ] 00:05:15.588 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.588 [2024-05-15 10:24:31.344522] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2482261 has claimed it. 00:05:15.588 [2024-05-15 10:24:31.344574] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.160 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2482550) - No such process 00:05:16.160 ERROR: process (pid: 2482550) is no longer running 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2482261 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2482261 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.160 lslocks: write error 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2482261 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2482261 ']' 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2482261 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:16.160 10:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2482261 00:05:16.160 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:16.160 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:16.160 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2482261' 00:05:16.160 killing process with pid 2482261 00:05:16.160 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2482261 00:05:16.160 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2482261 00:05:17.102 00:05:17.102 real 0m2.630s 00:05:17.102 user 0m2.703s 00:05:17.102 sys 0m0.743s 00:05:17.102 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:17.102 10:24:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.102 ************************************ 00:05:17.102 END TEST locking_app_on_locked_coremask 00:05:17.102 ************************************ 00:05:17.102 10:24:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:17.102 10:24:32 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:17.102 10:24:32 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:17.102 10:24:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.102 ************************************ 00:05:17.102 START TEST locking_overlapped_coremask 00:05:17.102 ************************************ 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2482888 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2482888 /var/tmp/spdk.sock 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2482888 ']' 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.102 10:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:17.362 [2024-05-15 10:24:33.041637] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:17.362 [2024-05-15 10:24:33.041762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482888 ] 00:05:17.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.362 [2024-05-15 10:24:33.155692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.622 [2024-05-15 10:24:33.251353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.622 [2024-05-15 10:24:33.251450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.622 [2024-05-15 10:24:33.251455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2483013 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2483013 /var/tmp/spdk2.sock 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2483013 /var/tmp/spdk2.sock 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2483013 /var/tmp/spdk2.sock 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2483013 ']' 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:17.880 10:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.139 [2024-05-15 10:24:33.804693] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:18.139 [2024-05-15 10:24:33.804780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483013 ] 00:05:18.139 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.139 [2024-05-15 10:24:33.935405] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2482888 has claimed it. 00:05:18.139 [2024-05-15 10:24:33.935453] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.707 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2483013) - No such process 00:05:18.707 ERROR: process (pid: 2483013) is no longer running 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2482888 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 2482888 ']' 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 2482888 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2482888 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2482888' 00:05:18.707 killing process with pid 2482888 00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 2482888 
00:05:18.707 10:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 2482888 00:05:19.646 00:05:19.646 real 0m2.285s 00:05:19.646 user 0m5.917s 00:05:19.646 sys 0m0.520s 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 ************************************ 00:05:19.646 END TEST locking_overlapped_coremask 00:05:19.646 ************************************ 00:05:19.646 10:24:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:19.646 10:24:35 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:19.646 10:24:35 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:19.646 10:24:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 ************************************ 00:05:19.646 START TEST locking_overlapped_coremask_via_rpc 00:05:19.646 ************************************ 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2483369 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2483369 /var/tmp/spdk.sock 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2483369 ']' 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:19.646 10:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 [2024-05-15 10:24:35.392129] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:19.646 [2024-05-15 10:24:35.392241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483369 ] 00:05:19.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.646 [2024-05-15 10:24:35.508212] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:19.646 [2024-05-15 10:24:35.508245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.907 [2024-05-15 10:24:35.602331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.907 [2024-05-15 10:24:35.602420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.907 [2024-05-15 10:24:35.602426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2483525 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2483525 /var/tmp/spdk2.sock 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2483525 ']' 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:20.479 10:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.479 [2024-05-15 10:24:36.204240] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:20.479 [2024-05-15 10:24:36.204383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483525 ] 00:05:20.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.738 [2024-05-15 10:24:36.368998] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.738 [2024-05-15 10:24:36.369037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.738 [2024-05-15 10:24:36.558391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.738 [2024-05-15 10:24:36.562113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.738 [2024-05-15 10:24:36.562145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.678 [2024-05-15 10:24:37.282185] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2483369 has claimed it. 
00:05:21.678 request: 00:05:21.678 { 00:05:21.678 "method": "framework_enable_cpumask_locks", 00:05:21.678 "req_id": 1 00:05:21.678 } 00:05:21.678 Got JSON-RPC error response 00:05:21.678 response: 00:05:21.678 { 00:05:21.678 "code": -32603, 00:05:21.678 "message": "Failed to claim CPU core: 2" 00:05:21.678 } 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2483369 /var/tmp/spdk.sock 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2483369 ']' 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2483525 /var/tmp/spdk2.sock 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2483525 ']' 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:21.678 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.937 00:05:21.937 real 0m2.310s 00:05:21.937 user 0m0.737s 00:05:21.937 sys 0m0.145s 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:21.937 10:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.937 ************************************ 00:05:21.937 END TEST locking_overlapped_coremask_via_rpc 00:05:21.937 ************************************ 00:05:21.937 10:24:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:21.937 10:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2483369 ]] 00:05:21.937 10:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2483369 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2483369 ']' 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2483369 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2483369 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2483369' 00:05:21.937 killing process with pid 2483369 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2483369 00:05:21.937 10:24:37 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2483369 00:05:22.876 10:24:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2483525 ]] 00:05:22.876 10:24:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2483525 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2483525 ']' 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2483525 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2483525 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2483525' 00:05:22.876 killing process with pid 2483525 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2483525 00:05:22.876 10:24:38 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2483525 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2483369 ]] 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2483369 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2483369 ']' 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2483369 00:05:23.814 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2483369) - No such process 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2483369 is not found' 00:05:23.814 Process with pid 2483369 is not found 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2483525 ]] 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2483525 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2483525 ']' 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2483525 00:05:23.814 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2483525) - No such process 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2483525 is not found' 00:05:23.814 Process with pid 2483525 is not found 00:05:23.814 10:24:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.814 00:05:23.814 real 0m22.677s 00:05:23.814 user 0m37.379s 00:05:23.814 sys 0m5.444s 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.814 10:24:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.814 ************************************ 00:05:23.814 END TEST cpu_locks 00:05:23.814 ************************************ 00:05:23.814 00:05:23.814 real 0m47.150s 00:05:23.814 user 1m24.128s 00:05:23.814 sys 0m8.761s 00:05:23.814 10:24:39 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.814 10:24:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.814 ************************************ 00:05:23.814 END TEST event 00:05:23.814 ************************************ 00:05:23.814 10:24:39 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:23.814 10:24:39 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:23.814 10:24:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:23.814 10:24:39 -- common/autotest_common.sh@10 -- # set +x 00:05:23.814 ************************************ 00:05:23.814 START TEST thread 00:05:23.814 ************************************ 00:05:23.814 10:24:39 thread -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:23.814 * Looking for test storage... 00:05:23.814 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:05:23.814 10:24:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.814 10:24:39 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:23.814 10:24:39 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:23.814 10:24:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.814 ************************************ 00:05:23.814 START TEST thread_poller_perf 00:05:23.814 ************************************ 00:05:23.814 10:24:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.814 [2024-05-15 10:24:39.654559] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:23.814 [2024-05-15 10:24:39.654670] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484228 ] 00:05:24.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.071 [2024-05-15 10:24:39.770430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.071 [2024-05-15 10:24:39.863562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.071 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:25.486 ====================================== 00:05:25.486 busy:1908513214 (cyc) 00:05:25.486 total_run_count: 399000 00:05:25.486 tsc_hz: 1900000000 (cyc) 00:05:25.486 ====================================== 00:05:25.486 poller_cost: 4783 (cyc), 2517 (nsec) 00:05:25.486 00:05:25.486 real 0m1.400s 00:05:25.486 user 0m1.261s 00:05:25.486 sys 0m0.134s 00:05:25.486 10:24:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.486 10:24:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.486 ************************************ 00:05:25.486 END TEST thread_poller_perf 00:05:25.486 ************************************ 00:05:25.486 10:24:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.486 10:24:41 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:25.486 10:24:41 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.486 10:24:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.486 ************************************ 00:05:25.486 START TEST thread_poller_perf 00:05:25.486 ************************************ 00:05:25.486 10:24:41 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.486 [2024-05-15 10:24:41.106204] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:25.486 [2024-05-15 10:24:41.106316] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484539 ] 00:05:25.486 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.486 [2024-05-15 10:24:41.220283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.486 [2024-05-15 10:24:41.313914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.486 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:26.870 ====================================== 00:05:26.870 busy:1901770318 (cyc) 00:05:26.870 total_run_count: 5368000 00:05:26.870 tsc_hz: 1900000000 (cyc) 00:05:26.870 ====================================== 00:05:26.870 poller_cost: 354 (cyc), 186 (nsec) 00:05:26.870 00:05:26.870 real 0m1.387s 00:05:26.870 user 0m1.264s 00:05:26.870 sys 0m0.117s 00:05:26.870 10:24:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.870 10:24:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 END TEST thread_poller_perf 00:05:26.870 ************************************ 00:05:26.870 10:24:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:26.870 00:05:26.870 real 0m2.966s 00:05:26.870 user 0m2.594s 00:05:26.870 sys 0m0.369s 00:05:26.870 10:24:42 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.870 10:24:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 END TEST thread 00:05:26.870 ************************************ 00:05:26.870 10:24:42 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:26.870 10:24:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.870 10:24:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.870 10:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 START TEST accel 00:05:26.870 ************************************ 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:26.870 * Looking for test storage... 00:05:26.870 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:05:26.870 10:24:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:26.870 10:24:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:26.870 10:24:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.870 10:24:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2484912 00:05:26.870 10:24:42 accel -- accel/accel.sh@63 -- # waitforlisten 2484912 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@828 -- # '[' -z 2484912 ']' 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.870 10:24:42 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:26.870 10:24:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 10:24:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:26.870 10:24:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:26.870 10:24:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.870 10:24:42 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:26.870 10:24:42 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:26.870 10:24:42 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:26.870 10:24:42 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:26.870 10:24:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.870 10:24:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.870 10:24:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:26.870 10:24:42 accel -- accel/accel.sh@41 -- # jq -r . 00:05:26.870 [2024-05-15 10:24:42.710540] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:26.870 [2024-05-15 10:24:42.710650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484912 ] 00:05:27.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.129 [2024-05-15 10:24:42.828924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.129 [2024-05-15 10:24:42.923447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.129 [2024-05-15 10:24:42.927978] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:27.129 [2024-05-15 10:24:42.935929] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@861 -- # return 0 00:05:35.267 10:24:50 accel -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:05:35.267 10:24:50 accel -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.267 "method": "dsa_scan_accel_module", 00:05:35.267 10:24:50 accel -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:05:35.267 10:24:50 accel -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.267 10:24:50 accel -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.267 "method": "iaa_scan_accel_module" 00:05:35.267 
10:24:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:35.267 10:24:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:35.267 10:24:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:35.267 10:24:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.267 10:24:50 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:35.267 10:24:50 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 
00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.267 10:24:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:35.267 10:24:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:35.267 10:24:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:35.268 10:24:50 accel -- accel/accel.sh@75 -- # killprocess 2484912 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@947 -- # '[' -z 2484912 ']' 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@951 -- # kill -0 2484912 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@952 -- # uname 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2484912 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2484912' 00:05:35.268 killing process with pid 2484912 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@966 -- # kill 2484912 00:05:35.268 10:24:50 accel -- common/autotest_common.sh@971 -- # wait 2484912 00:05:37.810 10:24:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:37.810 10:24:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.810 10:24:53 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": 
"iaa_scan_accel_module"}') 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:37.810 10:24:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:37.810 10:24:53 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:37.810 10:24:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:37.810 10:24:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.810 10:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.810 ************************************ 00:05:37.810 START TEST accel_missing_filename 00:05:37.810 ************************************ 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:37.810 10:24:53 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:37.810 10:24:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:37.810 [2024-05-15 10:24:53.419942] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:37.810 [2024-05-15 10:24:53.420048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487080 ] 00:05:37.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.810 [2024-05-15 10:24:53.535185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.810 [2024-05-15 10:24:53.635671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.810 [2024-05-15 10:24:53.640163] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:37.810 [2024-05-15 10:24:53.648129] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:44.388 [2024-05-15 10:25:00.020588] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:46.298 [2024-05-15 10:25:01.877009] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:46.298 A filename is required. 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:46.298 00:05:46.298 real 0m8.648s 00:05:46.298 user 0m2.291s 00:05:46.298 sys 0m0.219s 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.298 10:25:02 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:46.298 ************************************ 00:05:46.298 END TEST accel_missing_filename 00:05:46.298 ************************************ 00:05:46.298 10:25:02 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:46.298 10:25:02 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:46.298 10:25:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:46.298 10:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.298 ************************************ 00:05:46.298 START TEST accel_compress_verify 00:05:46.298 ************************************ 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:05:46.298 10:25:02 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:46.298 10:25:02 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:46.298 [2024-05-15 10:25:02.112106] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:05:46.298 [2024-05-15 10:25:02.112173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488812 ] 00:05:46.298 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.557 [2024-05-15 10:25:02.196650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.557 [2024-05-15 10:25:02.293570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.557 [2024-05-15 10:25:02.298040] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:46.557 [2024-05-15 10:25:02.306004] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:53.133 [2024-05-15 10:25:08.708396] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.042 [2024-05-15 10:25:10.562831] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:55.042 00:05:55.042 Compression does not support the verify option, aborting. 
00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:55.042 00:05:55.042 real 0m8.632s 00:05:55.042 user 0m2.286s 00:05:55.042 sys 0m0.209s 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.042 10:25:10 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:55.042 ************************************ 00:05:55.042 END TEST accel_compress_verify 00:05:55.042 ************************************ 00:05:55.042 10:25:10 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.042 ************************************ 00:05:55.042 START TEST accel_wrong_workload 00:05:55.042 ************************************ 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.042 10:25:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:55.042 10:25:10 accel.accel_wrong_workload -- 
accel/accel.sh@41 -- # jq -r . 00:05:55.042 Unsupported workload type: foobar 00:05:55.042 [2024-05-15 10:25:10.798166] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:55.042 accel_perf options: 00:05:55.042 [-h help message] 00:05:55.042 [-q queue depth per core] 00:05:55.042 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.042 [-T number of threads per core 00:05:55.042 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.042 [-t time in seconds] 00:05:55.042 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.042 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.042 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.042 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.042 [-S for crc32c workload, use this seed value (default 0) 00:05:55.042 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.042 [-f for fill workload, use this BYTE value (default 255) 00:05:55.042 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.042 [-y verify result if this switch is on] 00:05:55.042 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.042 Can be used to spread operations across a wider range of memory. 00:05:55.042 Error: writing output failed: Broken pipe 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:55.042 00:05:55.042 real 0m0.035s 00:05:55.042 user 0m0.042s 00:05:55.042 sys 0m0.024s 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.042 10:25:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:55.042 ************************************ 00:05:55.042 END TEST accel_wrong_workload 00:05:55.042 ************************************ 00:05:55.042 10:25:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.042 10:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.042 ************************************ 00:05:55.042 START TEST accel_negative_buffers 00:05:55.042 ************************************ 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.042 10:25:10 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:55.042 10:25:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:55.042 -x option must be non-negative. 00:05:55.042 [2024-05-15 10:25:10.891003] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:55.042 accel_perf options: 00:05:55.042 [-h help message] 00:05:55.042 [-q queue depth per core] 00:05:55.042 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.042 [-T number of threads per core 00:05:55.042 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.042 [-t time in seconds] 00:05:55.042 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.042 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.042 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.042 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.042 [-S for crc32c workload, use this seed value (default 0) 00:05:55.042 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.042 [-f for fill workload, use this BYTE value (default 255) 00:05:55.042 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.042 [-y verify result if this switch is on] 00:05:55.042 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.042 Can be used to spread operations across a wider range of memory. 
00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:55.042 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:55.042 00:05:55.042 real 0m0.051s 00:05:55.043 user 0m0.055s 00:05:55.043 sys 0m0.028s 00:05:55.043 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.043 10:25:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:55.043 ************************************ 00:05:55.043 END TEST accel_negative_buffers 00:05:55.043 ************************************ 00:05:55.304 10:25:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:55.304 10:25:10 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:55.304 10:25:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.304 10:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.304 ************************************ 00:05:55.304 START TEST accel_crc32c 00:05:55.304 ************************************ 00:05:55.304 10:25:10 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:55.304 10:25:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:55.304 [2024-05-15 10:25:11.001060] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:05:55.304 [2024-05-15 10:25:11.001160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490637 ] 00:05:55.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.304 [2024-05-15 10:25:11.114304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.564 [2024-05-15 10:25:11.213977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.564 [2024-05-15 10:25:11.218519] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:55.564 [2024-05-15 10:25:11.226481] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 
accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=dsa 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.207 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.208 10:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.756 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.757 10:25:20 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.757 10:25:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.017 10:25:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:05.017 10:25:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:05.017 10:25:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:05.017 00:06:05.017 real 0m9.670s 00:06:05.017 user 0m3.280s 00:06:05.017 sys 0m0.226s 00:06:05.017 10:25:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.017 10:25:20 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:05.017 ************************************ 00:06:05.017 END TEST accel_crc32c 00:06:05.017 ************************************ 00:06:05.017 10:25:20 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:05.017 10:25:20 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:05.017 10:25:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.017 10:25:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.017 ************************************ 00:06:05.017 START TEST accel_crc32c_C2 00:06:05.017 ************************************ 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:05.017 10:25:20 
accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.017 10:25:20 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:05.017 [2024-05-15 10:25:20.735948] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:06:05.017 [2024-05-15 10:25:20.736061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492661 ] 00:06:05.017 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.017 [2024-05-15 10:25:20.851423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.279 [2024-05-15 10:25:20.951464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.279 [2024-05-15 10:25:20.955988] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:05.279 [2024-05-15 10:25:20.963948] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.851 10:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:15.142 00:06:15.142 real 0m9.671s 00:06:15.142 user 0m3.279s 00:06:15.142 sys 0m0.218s 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.142 10:25:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:15.142 ************************************ 00:06:15.142 END TEST accel_crc32c_C2 00:06:15.142 ************************************ 00:06:15.142 10:25:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:15.142 10:25:30 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:15.142 10:25:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.142 10:25:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.142 ************************************ 00:06:15.142 START TEST accel_copy 00:06:15.142 ************************************ 00:06:15.142 10:25:30 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy -y 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:15.142 10:25:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:15.142 [2024-05-15 10:25:30.462027] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:06:15.142 [2024-05-15 10:25:30.462131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494520 ] 00:06:15.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.142 [2024-05-15 10:25:30.552263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.142 [2024-05-15 10:25:30.650597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.142 [2024-05-15 10:25:30.655095] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:15.142 [2024-05-15 10:25:30.663053] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@23 -- # 
accel_opc=copy 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=dsa 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.744 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.745 10:25:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.290 10:25:40 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.290 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:24.291 10:25:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:24.291 00:06:24.291 real 0m9.676s 00:06:24.291 user 0m3.289s 00:06:24.291 sys 0m0.221s 00:06:24.291 10:25:40 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:24.291 10:25:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.291 ************************************ 00:06:24.291 END TEST accel_copy 00:06:24.291 ************************************ 00:06:24.291 10:25:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.291 10:25:40 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:24.291 10:25:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:24.291 10:25:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.291 ************************************ 00:06:24.291 START TEST accel_fill 00:06:24.291 ************************************ 00:06:24.291 10:25:40 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.291 10:25:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:24.291 10:25:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:24.291 10:25:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.291 10:25:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.291 10:25:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:24.553 10:25:40 
accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:24.553 10:25:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:24.553 [2024-05-15 10:25:40.196588] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:06:24.553 [2024-05-15 10:25:40.196696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496368 ] 00:06:24.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.553 [2024-05-15 10:25:40.313946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.553 [2024-05-15 10:25:40.417488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.553 [2024-05-15 10:25:40.422028] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:24.814 [2024-05-15 10:25:40.429986] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.401 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=dsa 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=dsa 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.402 10:25:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:34.014 10:25:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:34.014 00:06:34.014 real 0m9.702s 00:06:34.014 user 0m3.285s 00:06:34.014 sys 0m0.249s 00:06:34.014 10:25:49 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:34.014 10:25:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:34.014 ************************************ 00:06:34.014 END TEST accel_fill 00:06:34.014 ************************************ 00:06:34.014 10:25:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:34.014 10:25:49 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:34.014 10:25:49 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:34.014 10:25:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.276 ************************************ 00:06:34.276 START TEST accel_copy_crc32c 00:06:34.276 ************************************ 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:34.276 10:25:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:34.276 [2024-05-15 10:25:49.951134] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:06:34.276 [2024-05-15 10:25:49.951240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498389 ] 00:06:34.276 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.276 [2024-05-15 10:25:50.072415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.537 [2024-05-15 10:25:50.179441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.537 [2024-05-15 10:25:50.183969] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:34.537 [2024-05-15 10:25:50.191927] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=dsa 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.118 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.119 10:25:56 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.119 10:25:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:44.414 00:06:44.414 real 0m9.681s 00:06:44.414 user 0m3.292s 00:06:44.414 sys 0m0.228s 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:44.414 10:25:59 accel.accel_copy_crc32c -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.414 ************************************ 00:06:44.414 END TEST accel_copy_crc32c 00:06:44.414 ************************************ 00:06:44.414 10:25:59 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.414 10:25:59 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:44.414 10:25:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:44.414 10:25:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.414 ************************************ 00:06:44.414 START TEST accel_copy_crc32c_C2 00:06:44.414 ************************************ 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.414 10:25:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.414 [2024-05-15 10:25:59.689379] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
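Note: the accel_copy_crc32c_C2 case starting here ultimately boils down to one accel_perf invocation with the DSA and IAA modules enabled. A minimal standalone sketch of that invocation, assuming the usual SPDK JSON subsystem wrapper around the two scan methods shown in this log (the wrapper structure and the /tmp path are assumptions, not copied from accel.sh):

  # Assumed config wrapper; only the two "method" entries appear in the log above.
  printf '%s\n' '{ "subsystems": [ { "subsystem": "accel", "config": [ { "method": "dsa_scan_accel_module" }, { "method": "iaa_scan_accel_module" } ] } ] }' > /tmp/accel.json
  # Same flags the harness passes: 1-second run (-t 1), copy_crc32c workload (-w),
  # result verification (-y) and -C 2, using the binary path from this workspace.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
      -c /tmp/accel.json -t 1 -w copy_crc32c -y -C 2

The val= lines traced below for this case (accel_opc=copy_crc32c, '4096 bytes' and '8192 bytes' buffers, accel_module=dsa, '1 seconds') are accel.sh parsing exactly these flags before it launches the run.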
00:06:44.414 [2024-05-15 10:25:59.689453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500182 ] 00:06:44.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.414 [2024-05-15 10:25:59.778131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.414 [2024-05-15 10:25:59.877701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.414 [2024-05-15 10:25:59.882221] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:44.414 [2024-05-15 10:25:59.890185] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:50.988 10:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:53.522 10:26:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:53.522 00:06:53.522 real 0m9.693s 00:06:53.523 user 0m3.307s 00:06:53.523 sys 0m0.209s 00:06:53.523 10:26:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:53.523 10:26:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:53.523 ************************************ 00:06:53.523 END TEST accel_copy_crc32c_C2 00:06:53.523 ************************************ 00:06:53.523 10:26:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:53.523 10:26:09 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:53.523 10:26:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:53.523 10:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.782 ************************************ 00:06:53.782 START TEST accel_dualcast 00:06:53.782 ************************************ 00:06:53.782 10:26:09 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:53.782 
10:26:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.782 10:26:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:53.783 10:26:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:53.783 [2024-05-15 10:26:09.441162] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:06:53.783 [2024-05-15 10:26:09.441276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502262 ] 00:06:53.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.783 [2024-05-15 10:26:09.556289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.783 [2024-05-15 10:26:09.653860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.043 [2024-05-15 10:26:09.658413] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:54.043 [2024-05-15 10:26:09.666348] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.623 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dsa 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=dsa 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.624 10:26:16 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.624 10:26:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:03.922 10:26:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:03.922 00:07:03.922 real 0m9.670s 00:07:03.922 user 0m3.280s 00:07:03.922 sys 0m0.226s 00:07:03.922 10:26:19 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:03.922 10:26:19 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:03.922 ************************************ 00:07:03.922 END TEST accel_dualcast 00:07:03.922 ************************************ 00:07:03.922 10:26:19 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:03.922 10:26:19 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:03.922 10:26:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:03.922 10:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.922 ************************************ 00:07:03.922 START TEST accel_compare 
00:07:03.922 ************************************ 00:07:03.922 10:26:19 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:03.922 10:26:19 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:03.922 [2024-05-15 10:26:19.168688] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
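The compare case being set up here is launched the same way as the earlier copy_crc32c and dualcast runs: build_accel_config collects the dsa_scan_accel_module and iaa_scan_accel_module JSON snippets visible in the trace and hands them to accel_perf as a config file on /dev/fd/62, i.e. via bash process substitution. A minimal standalone sketch of that invocation follows; the two "method" entries are copied from the trace, while the surrounding "subsystems"/"config" wrapper is an assumption about what the harness emits, since the assembled JSON is never printed in this log.

    # Sketch only -- reproduces the observable accel_perf call by hand.
    # The outer JSON wrapper is assumed; only the two method objects appear in the trace.
    cfg='{"subsystems":[{"subsystem":"accel","config":[
           {"method":"dsa_scan_accel_module"},
           {"method":"iaa_scan_accel_module"}]}]}'
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c <(printf '%s\n' "$cfg") -t 1 -w compare -y   # <(...) is what shows up as /dev/fd/62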
00:07:03.922 [2024-05-15 10:26:19.168793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504055 ] 00:07:03.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.922 [2024-05-15 10:26:19.283201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.922 [2024-05-15 10:26:19.381768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.922 [2024-05-15 10:26:19.386295] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:03.922 [2024-05-15 10:26:19.394257] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:10.546 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.546 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.546 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=dsa 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- 
# case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=dsa 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.547 10:26:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:13.093 10:26:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:13.093 00:07:13.093 real 0m9.676s 00:07:13.093 user 0m3.281s 00:07:13.093 sys 0m0.226s 00:07:13.093 10:26:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.093 10:26:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:13.093 ************************************ 00:07:13.093 END TEST accel_compare 00:07:13.093 ************************************ 00:07:13.093 10:26:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:13.093 10:26:28 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:13.093 10:26:28 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.093 10:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.093 ************************************ 00:07:13.093 START TEST accel_xor 00:07:13.094 ************************************ 00:07:13.094 10:26:28 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:13.094 10:26:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
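Every case up to this point, including the xor run now being set up, reports the same shape of result: roughly 9.6-9.7 s of wall-clock time for a 1-second (-t 1) measurement window, with the difference largely spent on SPDK application start-up (hugepage setup, DSA/IAA probing) and teardown rather than on the workload itself, followed by the [[ -n dsa ]]-style assertions that check which module actually serviced the opcode. When skimming a saved copy of a log like this one, the per-test summaries can be pulled out with a couple of greps; "build.log" below is only a placeholder filename for wherever the console output was captured.

    # Sketch: extract the per-test timing summaries and the test banners.
    grep -E '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' build.log
    grep -E '(START|END) TEST' build.log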
00:07:13.094 [2024-05-15 10:26:28.907015] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:07:13.094 [2024-05-15 10:26:28.907153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506042 ] 00:07:13.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.354 [2024-05-15 10:26:29.024631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.354 [2024-05-15 10:26:29.123681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.354 [2024-05-15 10:26:29.128200] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:13.354 [2024-05-15 10:26:29.136160] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 
10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.943 10:26:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.243 00:07:23.243 real 0m9.661s 00:07:23.243 user 0m3.273s 00:07:23.243 sys 0m0.229s 00:07:23.243 10:26:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:23.243 10:26:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:23.243 ************************************ 00:07:23.243 END TEST accel_xor 00:07:23.243 ************************************ 00:07:23.243 10:26:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:23.243 10:26:38 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:07:23.243 10:26:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:23.243 10:26:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.243 ************************************ 00:07:23.243 START TEST accel_xor 00:07:23.243 ************************************ 00:07:23.243 10:26:38 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:23.243 10:26:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
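Note that the xor case just finished ends with accel_module=software rather than dsa: unlike copy_crc32c, dualcast and compare, the xor opcode is serviced by the software engine on this machine, and the closing [[ software == \s\o\f\t\w\a\r\e ]] check accepts that. The only difference in the second xor run starting here is the extra -x 3 argument, which shows up in the trace as val=3 and appears to raise the number of xor source buffers from the default two to three; the accel_perf help text is not part of this log, so that reading is inferred from the trace rather than confirmed. The two invocations, reusing the process-substituted $cfg from the compare sketch above:

    # Same binary and config as in the earlier sketch; only the workload flags differ.
    accel_perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf
    "$accel_perf" -c <(printf '%s\n' "$cfg") -t 1 -w xor -y        # two-source xor
    "$accel_perf" -c <(printf '%s\n' "$cfg") -t 1 -w xor -y -x 3   # three-source variant (see caveat above)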
00:07:23.243 [2024-05-15 10:26:38.640663] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:07:23.243 [2024-05-15 10:26:38.640796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507927 ] 00:07:23.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.243 [2024-05-15 10:26:38.771689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.243 [2024-05-15 10:26:38.870445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.243 [2024-05-15 10:26:38.875003] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:23.243 [2024-05-15 10:26:38.882947] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 
10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.822 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.823 10:26:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:33.121 10:26:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.121 00:07:33.121 real 0m9.704s 00:07:33.121 user 0m3.299s 00:07:33.121 sys 0m0.248s 00:07:33.121 10:26:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:33.121 10:26:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 ************************************ 00:07:33.121 END TEST accel_xor 00:07:33.121 ************************************ 00:07:33.121 10:26:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:33.121 10:26:48 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:33.121 10:26:48 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:33.121 10:26:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 ************************************ 00:07:33.121 START TEST accel_dif_verify 00:07:33.121 ************************************ 00:07:33.121 10:26:48 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.121 10:26:48 
accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:33.121 10:26:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:33.121 [2024-05-15 10:26:48.394572] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:07:33.121 [2024-05-15 10:26:48.394678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509722 ] 00:07:33.121 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.121 [2024-05-15 10:26:48.511574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.121 [2024-05-15 10:26:48.614270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.121 [2024-05-15 10:26:48.618771] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:33.121 [2024-05-15 10:26:48.626731] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.707 
10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dsa 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=dsa 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.707 
10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:39.707 10:26:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:39.708 10:26:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:39.708 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:39.708 10:26:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:42.250 10:26:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:42.250 00:07:42.250 real 0m9.677s 00:07:42.251 user 0m3.271s 00:07:42.251 sys 0m0.240s 00:07:42.251 10:26:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:42.251 10:26:58 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 ************************************ 00:07:42.251 END TEST accel_dif_verify 00:07:42.251 ************************************ 00:07:42.251 10:26:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:42.251 10:26:58 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:42.251 10:26:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:42.251 10:26:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 ************************************ 00:07:42.251 START TEST accel_dif_generate 00:07:42.251 ************************************ 00:07:42.251 
10:26:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:42.251 10:26:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:42.251 [2024-05-15 10:26:58.120880] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
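The dif_verify case that just finished ran on the dsa module with the parameter set visible in its trace: 4096-byte transfer buffers, a 512-byte block size and 8 bytes of metadata per block, which matches the usual 512+8 DIF layout even though the trace does not label the individual fields. The dif_generate case starting here uses the same sizes but, as its closing check will show, is serviced by the software module. Neither run_test line passes -y, and the sizes do not appear as explicit flags, so they are presumably accel_perf defaults. A sketch of the two calls, reusing $accel_perf and $cfg from the sketches above:

    # dif_verify reported module=dsa; dif_generate reports module=software below.
    "$accel_perf" -c <(printf '%s\n' "$cfg") -t 1 -w dif_verify
    "$accel_perf" -c <(printf '%s\n' "$cfg") -t 1 -w dif_generate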
00:07:42.251 [2024-05-15 10:26:58.120953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511717 ] 00:07:42.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.512 [2024-05-15 10:26:58.217972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.512 [2024-05-15 10:26:58.317726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.512 [2024-05-15 10:26:58.322300] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:42.512 [2024-05-15 10:26:58.330260] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.139 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.140 10:27:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:52.436 10:27:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.436 00:07:52.436 real 0m9.640s 00:07:52.436 user 0m3.242s 00:07:52.436 sys 0m0.219s 00:07:52.436 10:27:07 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:52.436 10:27:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:52.436 ************************************ 00:07:52.436 END TEST accel_dif_generate 00:07:52.436 ************************************ 00:07:52.436 10:27:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:52.436 10:27:07 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:52.436 10:27:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:52.436 10:27:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.436 ************************************ 00:07:52.436 START TEST accel_dif_generate_copy 00:07:52.436 
************************************ 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:52.436 10:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:52.436 [2024-05-15 10:27:07.828508] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
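The START/END banners and the real/user/sys timings that frame each run come from the run_test wrapper traced at common/autotest_common.sh line 1122; the wrapper itself is not captured in this log. A hypothetical minimal equivalent that would produce the same framing around accel_test:

# Sketch only - the real helper lives in common/autotest_common.sh and is not
# shown here; bash's time keyword produces the real/user/sys lines seen above.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy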
00:07:52.436 [2024-05-15 10:27:07.828617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514147 ] 00:07:52.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.436 [2024-05-15 10:27:07.942495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.436 [2024-05-15 10:27:08.040681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.436 [2024-05-15 10:27:08.045213] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:52.436 [2024-05-15 10:27:08.053176] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.016 10:27:14 accel.accel_dif_generate_copy 
-- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dsa 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.016 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:59.017 10:27:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:02.314 00:08:02.314 real 0m9.667s 00:08:02.314 user 0m3.292s 00:08:02.314 sys 0m0.211s 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:02.314 10:27:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.314 ************************************ 00:08:02.314 END TEST accel_dif_generate_copy 00:08:02.314 ************************************ 00:08:02.314 10:27:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:02.314 10:27:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:02.314 10:27:17 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:08:02.314 10:27:17 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:02.314 10:27:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.314 ************************************ 00:08:02.314 START TEST accel_comp 00:08:02.314 ************************************ 00:08:02.315 10:27:17 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:02.315 10:27:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:02.315 [2024-05-15 10:27:17.550601] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:08:02.315 [2024-05-15 10:27:17.550702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515961 ] 00:08:02.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.315 [2024-05-15 10:27:17.660207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.315 [2024-05-15 10:27:17.758821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.315 [2024-05-15 10:27:17.763293] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:02.315 [2024-05-15 10:27:17.771262] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.892 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:08.893 10:27:24 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=iaa 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=iaa 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 
seconds' 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.893 10:27:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.433 10:27:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:11.434 10:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:11.434 00:08:11.434 real 0m9.686s 00:08:11.434 user 0m3.326s 00:08:11.434 sys 0m0.186s 00:08:11.434 10:27:27 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:11.434 10:27:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:11.434 ************************************ 00:08:11.434 END TEST 
accel_comp 00:08:11.434 ************************************ 00:08:11.434 10:27:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:11.434 10:27:27 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:08:11.434 10:27:27 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:11.434 10:27:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.434 ************************************ 00:08:11.434 START TEST accel_decomp 00:08:11.434 ************************************ 00:08:11.434 10:27:27 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:11.434 10:27:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:11.434 [2024-05-15 10:27:27.291754] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:08:11.434 [2024-05-15 10:27:27.291857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518016 ] 00:08:11.694 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.694 [2024-05-15 10:27:27.402937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.694 [2024-05-15 10:27:27.500536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.694 [2024-05-15 10:27:27.505012] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:11.694 [2024-05-15 10:27:27.512976] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 
10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=iaa 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=iaa 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:18.325 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.326 10:27:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.618 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.618 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.618 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.618 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.618 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.619 10:27:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:21.619 00:08:21.619 real 0m9.645s 00:08:21.619 user 0m3.269s 00:08:21.619 sys 0m0.218s 00:08:21.619 10:27:36 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:21.619 10:27:36 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:21.619 ************************************ 00:08:21.619 END TEST accel_decomp 00:08:21.619 ************************************ 00:08:21.619 10:27:36 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.619 10:27:36 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:08:21.619 10:27:36 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:21.619 10:27:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.619 ************************************ 00:08:21.619 START TEST accel_decmop_full 00:08:21.619 ************************************ 00:08:21.619 10:27:36 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:21.619 10:27:36 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:21.619 [2024-05-15 10:27:36.984375] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:08:21.619 [2024-05-15 10:27:36.984447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519817 ] 00:08:21.619 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.619 [2024-05-15 10:27:37.071422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.619 [2024-05-15 10:27:37.169960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.619 [2024-05-15 10:27:37.174459] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:21.619 [2024-05-15 10:27:37.182422] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=iaa 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.192 10:27:43 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=iaa 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 
accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.193 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.730 10:27:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:30.730 00:08:30.730 real 0m9.637s 00:08:30.730 user 0m3.276s 00:08:30.730 sys 0m0.188s 00:08:30.730 10:27:46 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:30.730 10:27:46 accel.accel_decmop_full -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.730 ************************************ 00:08:30.730 END TEST accel_decmop_full 00:08:30.730 ************************************ 00:08:30.990 10:27:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.990 10:27:46 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:08:30.990 10:27:46 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:30.990 10:27:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.990 ************************************ 00:08:30.990 START TEST accel_decomp_mcore 00:08:30.990 ************************************ 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:30.990 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:30.990 [2024-05-15 10:27:46.695766] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:08:30.990 [2024-05-15 10:27:46.695875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521813 ] 00:08:30.990 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.990 [2024-05-15 10:27:46.813503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.249 [2024-05-15 10:27:46.915800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.249 [2024-05-15 10:27:46.915902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.249 [2024-05-15 10:27:46.916002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.249 [2024-05-15 10:27:46.916010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.249 [2024-05-15 10:27:46.920560] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:31.249 [2024-05-15 10:27:46.928523] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
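[editor's note] The EAL arguments echoed here (-c 0xf, reactors started on cores 0 through 3) line up with the -m 0xf mask on the accel_perf command line, and the two NOTICE lines from accel_dsa_rpc.c and accel_iaa_rpc.c confirm that the JSON config enabled both hardware modules before the workload began. The same two methods can also be issued against an already-running SPDK application over its RPC socket; the rpc.py calls below are a hedged sketch that assumes the default socket path and no extra parameters.
  # Assumes an SPDK application is already listening on /var/tmp/spdk.sock (default).
  ./scripts/rpc.py dsa_scan_accel_module
  ./scripts/rpc.py iaa_scan_accel_module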
00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=iaa 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.819 10:27:53 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.819 10:27:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- 
accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:41.112 00:08:41.112 real 0m9.719s 00:08:41.112 user 0m31.091s 00:08:41.112 sys 0m0.252s 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:41.112 10:27:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:41.112 ************************************ 00:08:41.112 END TEST accel_decomp_mcore 00:08:41.112 ************************************ 00:08:41.112 10:27:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:41.112 10:27:56 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:08:41.112 10:27:56 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:41.112 10:27:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.112 ************************************ 00:08:41.112 START TEST accel_decomp_full_mcore 00:08:41.112 ************************************ 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:41.112 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:41.113 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:41.113 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.113 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.113 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:41.113 10:27:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:41.113 [2024-05-15 10:27:56.471648] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
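[editor's note] The full_mcore variant adds -o 0 to the accel_perf command line above. In the non-full runs the option echo reports a 4096-byte transfer, while here it reports '111250 bytes', which appears to correspond to the whole bib input being handed to the IAA decompress path in one buffer rather than in 4 KiB chunks; the exact size derivation is an inference from those two echoes, not something the log states. A hypothetical cross-check on the test node:
  # Compare the echoed transfer size with the size of the input file (GNU stat assumed).
  stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib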
00:08:41.113 [2024-05-15 10:27:56.471755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523686 ] 00:08:41.113 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.113 [2024-05-15 10:27:56.588280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.113 [2024-05-15 10:27:56.690080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.113 [2024-05-15 10:27:56.690157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.113 [2024-05-15 10:27:56.690258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.113 [2024-05-15 10:27:56.690266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.113 [2024-05-15 10:27:56.694812] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:41.113 [2024-05-15 10:27:56.702767] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:47.749 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.749 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.749 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.749 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.749 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:47.750 10:28:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=iaa 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:47.750 10:28:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:50.286 00:08:50.286 real 0m9.697s 00:08:50.286 user 0m31.040s 00:08:50.286 sys 0m0.240s 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:50.286 10:28:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:50.286 ************************************ 00:08:50.286 END TEST accel_decomp_full_mcore 00:08:50.286 ************************************ 00:08:50.286 10:28:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:50.286 10:28:06 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:08:50.286 10:28:06 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:50.286 10:28:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.546 ************************************ 00:08:50.546 START TEST accel_decomp_mthread 00:08:50.546 ************************************ 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:50.546 10:28:06 accel.accel_decomp_mthread -- 
accel/accel.sh@41 -- # jq -r . 00:08:50.546 [2024-05-15 10:28:06.215742] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:08:50.546 [2024-05-15 10:28:06.215853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525490 ] 00:08:50.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.546 [2024-05-15 10:28:06.332097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.806 [2024-05-15 10:28:06.431567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.806 [2024-05-15 10:28:06.436087] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:50.806 [2024-05-15 10:28:06.444040] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.380 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:57.381 
10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=iaa 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.381 10:28:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.671 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:00.672 00:09:00.672 real 0m9.675s 00:09:00.672 user 0m3.281s 00:09:00.672 sys 0m0.230s 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:00.672 10:28:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:00.672 ************************************ 00:09:00.672 END TEST accel_decomp_mthread 00:09:00.672 ************************************ 00:09:00.672 10:28:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:00.672 10:28:15 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:09:00.672 10:28:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:00.672 10:28:15 accel -- common/autotest_common.sh@10 -- # set +x 00:09:00.672 ************************************ 00:09:00.672 START TEST accel_decomp_full_mthread 00:09:00.672 ************************************ 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:00.672 10:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:00.672 [2024-05-15 10:28:15.960201] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
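[editor's note] This invocation keeps the full-file transfer (-o 0) and adds -T 2; further down the option echo shows val=2 where the earlier runs showed val=1, and the EAL output echoed just below restricts the run to a single core, so two worker threads drive the decompress channel on that one core (per the mthread naming; an inference). A hedged stand-alone form of the same command, reusing a config saved to a regular file (accel.json is a hypothetical file holding the JSON sketched earlier):
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  ./build/examples/accel_perf -c accel.json \
      -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2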
00:09:00.672 [2024-05-15 10:28:15.960330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527555 ] 00:09:00.672 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.672 [2024-05-15 10:28:16.088983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.672 [2024-05-15 10:28:16.188663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.672 [2024-05-15 10:28:16.193188] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:00.672 [2024-05-15 10:28:16.201148] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:07.257 
10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=iaa 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.257 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.258 10:28:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.258 10:28:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:09.800 00:09:09.800 real 0m9.739s 00:09:09.800 user 0m3.301s 00:09:09.800 sys 0m0.261s 00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 
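[editor's note] Each decompress variant above closes with the same bash-style timing triple (real/user/sys); across this section the four-core mcore runs accumulate about 31 s of user time against roughly 9.7 s of wall clock, while the single-core runs stay near 3.3 s. A small, hedged sketch for pulling those numbers out of a saved copy of this console output (the console.log filename is an assumption):
  grep -E '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' console.log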
00:09:09.800 10:28:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:09.800 ************************************ 00:09:09.800 END TEST accel_decomp_full_mthread 00:09:09.800 ************************************ 00:09:10.061 10:28:25 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:10.061 10:28:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:10.061 10:28:25 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:09:10.061 10:28:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:10.061 10:28:25 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 10:28:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:10.061 10:28:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.061 10:28:25 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:10.061 10:28:25 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:10.061 10:28:25 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:10.061 10:28:25 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:10.061 10:28:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.061 10:28:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.061 10:28:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:10.061 10:28:25 accel -- accel/accel.sh@41 -- # jq -r . 00:09:10.061 ************************************ 00:09:10.061 START TEST accel_dif_functional_tests 00:09:10.061 ************************************ 00:09:10.061 10:28:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:10.061 [2024-05-15 10:28:25.783037] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
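[editor's note] The last run switches from accel_perf to the DIF functional-test binary at test/accel/dif/dif, fed the same build_accel_config JSON on /dev/fd/62; the EAL output just below brings it up on three cores, and the CUnit suite that follows exercises DIF Guard, App Tag and Ref Tag verification against the DSA path (the accel_dsa.c errors printed there are the expected negative-test detections). A hedged way to run it by hand, with the config saved to a regular file instead of the harness's pipe (accel.json is hypothetical but -c simply names a config path):
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  ./test/accel/dif/dif -c accel.json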
00:09:10.061 [2024-05-15 10:28:25.783148] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529353 ] 00:09:10.061 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.061 [2024-05-15 10:28:25.901361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.322 [2024-05-15 10:28:26.000282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.322 [2024-05-15 10:28:26.000374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.322 [2024-05-15 10:28:26.000379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.322 [2024-05-15 10:28:26.004943] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:10.322 [2024-05-15 10:28:26.012906] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:18.511 00:09:18.511 00:09:18.511 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.511 http://cunit.sourceforge.net/ 00:09:18.511 00:09:18.511 00:09:18.511 Suite: accel_dif 00:09:18.511 Test: verify: DIF generated, GUARD check ...passed 00:09:18.511 Test: verify: DIF generated, APPTAG check ...passed 00:09:18.511 Test: verify: DIF generated, REFTAG check ...passed 00:09:18.511 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:28:32.939285] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.511 [2024-05-15 10:28:32.939340] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.939353] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939363] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939369] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939377] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939383] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.939393] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.939400] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.939427] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:18.512 [2024-05-15 10:28:32.939439] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:09:18.512 [2024-05-15 10:28:32.939465] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:18.512 passed 00:09:18.512 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:28:32.939519] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.512 [2024-05-15 10:28:32.939531] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.939543] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939550] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939559] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939571] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939579] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.939585] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.939593] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.939601] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:18.512 [2024-05-15 10:28:32.939610] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:09:18.512 [2024-05-15 10:28:32.939625] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:18.512 passed 00:09:18.512 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:28:32.939656] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.512 [2024-05-15 10:28:32.939669] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.939676] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939685] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939690] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939698] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939704] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.939715] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.939722] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.939733] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:18.512 [2024-05-15 10:28:32.939744] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:09:18.512 [2024-05-15 10:28:32.939765] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:18.512 passed 00:09:18.512 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:18.512 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:28:32.939835] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.512 [2024-05-15 10:28:32.939846] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.939854] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939860] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939869] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939876] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.939884] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.939891] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.939901] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.939910] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:18.512 [2024-05-15 10:28:32.939919] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:09:18.512 passed 00:09:18.512 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:18.512 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:18.512 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:18.512 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:28:32.940074] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.512 [2024-05-15 10:28:32.940087] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.940093] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940101] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940108] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940116] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940127] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.940138] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.940144] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.940154] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:18.512 [2024-05-15 10:28:32.940161] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 10:28:32.940168] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940184] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940192] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940199] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940206] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.940212] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.940221] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 [2024-05-15 10:28:32.940229] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:18.512 [2024-05-15 10:28:32.940238] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:09:18.512 [2024-05-15 10:28:32.940248] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:09:18.512 passed[2024-05-15 10:28:32.940257] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:09:18.512 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 10:28:32.940265] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940274] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940280] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940289] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:18.512 [2024-05-15 10:28:32.940295] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:18.512 [2024-05-15 10:28:32.940304] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:18.512 [2024-05-15 10:28:32.940310] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:18.512 passed 00:09:18.512 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:18.512 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:18.512 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-05-15 10:28:32.940438] idxd.c:1565:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:09:18.512 passed 00:09:18.512 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-05-15 10:28:32.940476] idxd.c:1570:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:09:18.512 passed 00:09:18.512 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-05-15 10:28:32.940512] idxd.c:1575:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:09:18.512 passed 00:09:18.512 Test: generate copy: iovecs-len validate ...[2024-05-15 10:28:32.940548] idxd.c:1602:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:09:18.512 passed
00:09:18.512 Test: generate copy: buffer alignment validate ...passed
00:09:18.512
00:09:18.512 Run Summary: Type Total Ran Passed Failed Inactive
00:09:18.512 suites 1 1 n/a 0 0
00:09:18.512 tests 20 20 20 0 0
00:09:18.512 asserts 204 204 204 0 n/a
00:09:18.512
00:09:18.512 Elapsed time = 0.003 seconds
00:09:19.449
00:09:19.449 real 0m9.554s
00:09:19.449 user 0m20.154s
00:09:19.449 sys 0m0.291s
00:09:19.449 10:28:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable
00:09:19.449 10:28:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:09:19.449 ************************************
00:09:19.449 END TEST accel_dif_functional_tests
00:09:19.449 ************************************
00:09:19.449
00:09:19.449 real 3m52.739s
00:09:19.449 user 2m30.875s
00:09:19.449 sys 0m7.044s
00:09:19.449 10:28:35 accel -- common/autotest_common.sh@1123 -- # xtrace_disable
00:09:19.449 10:28:35 accel -- common/autotest_common.sh@10 -- # set +x
00:09:19.449 ************************************
00:09:19.449 END TEST accel
00:09:19.449 ************************************
00:09:19.710 10:28:35 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh
00:09:19.710 10:28:35 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:09:19.710 10:28:35 -- common/autotest_common.sh@1104 -- # xtrace_disable
00:09:19.710 10:28:35 -- common/autotest_common.sh@10 -- # set +x
00:09:19.710 ************************************
00:09:19.710 START TEST accel_rpc
00:09:19.710 ************************************
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh
00:09:19.710 * Looking for test storage...
00:09:19.710 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel
00:09:19.710 10:28:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:19.710 10:28:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2531193
00:09:19.710 10:28:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2531193
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 2531193 ']'
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable
00:09:19.710 10:28:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:19.711 10:28:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:09:19.711 [2024-05-15 10:28:35.485451] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization...
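The accel_rpc suite that starts here drives a bare spdk_tgt (launched with --wait-for-rpc) entirely over its JSON-RPC socket: the entries below scan the DSA and IAA modules, confirm that a second scan is rejected with "Operation already in progress" (code -114), route the copy opcode to the software module, and only then call framework_start_init. A rough by-hand equivalent of that sequence — a sketch only, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock RPC socket rather than the absolute Jenkins paths used in this job:

  # start the target without initializing the framework, so accel modules can still be configured
  ./build/bin/spdk_tgt --wait-for-rpc &

  # enable the hardware accel modules; calling either scan a second time returns
  # JSON-RPC error -114 ("Operation already in progress"), which is what the suite checks
  ./scripts/rpc.py dsa_scan_accel_module
  ./scripts/rpc.py iaa_scan_accel_module

  # pin the copy opcode to the software module, then finish framework initialization
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init

  # verify the assignment, as the test does with jq/grep
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected: software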
00:09:19.711 [2024-05-15 10:28:35.485534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531193 ] 00:09:19.711 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.711 [2024-05-15 10:28:35.574930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.970 [2024-05-15 10:28:35.685064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.540 10:28:36 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:20.540 10:28:36 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:09:20.540 10:28:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:20.540 10:28:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:09:20.540 10:28:36 accel_rpc -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:09:20.540 10:28:36 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:09:20.540 10:28:36 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:20.540 10:28:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 ************************************ 00:09:20.540 START TEST accel_scan_dsa_modules 00:09:20.540 ************************************ 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1122 -- # accel_scan_dsa_modules_test_suite 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 [2024-05-15 10:28:36.225602] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.540 10:28:36 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@649 -- # local es=0 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@652 -- # rpc_cmd dsa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 request: 00:09:20.541 { 00:09:20.541 "method": "dsa_scan_accel_module", 00:09:20.541 "req_id": 1 00:09:20.541 } 00:09:20.541 Got JSON-RPC error response 00:09:20.541 response: 00:09:20.541 { 00:09:20.541 "code": -114, 00:09:20.541 "message": "Operation already in progress" 00:09:20.541 } 00:09:20.541 10:28:36 
accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@652 -- # es=1 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:20.541 00:09:20.541 real 0m0.021s 00:09:20.541 user 0m0.004s 00:09:20.541 sys 0m0.003s 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 ************************************ 00:09:20.541 END TEST accel_scan_dsa_modules 00:09:20.541 ************************************ 00:09:20.541 10:28:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:20.541 10:28:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:09:20.541 10:28:36 accel_rpc -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 ************************************ 00:09:20.541 START TEST accel_scan_iaa_modules 00:09:20.541 ************************************ 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1122 -- # accel_scan_iaa_modules_test_suite 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 [2024-05-15 10:28:36.293589] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@649 -- # local es=0 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@652 -- # rpc_cmd iaa_scan_accel_module 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 request: 00:09:20.541 { 00:09:20.541 "method": "iaa_scan_accel_module", 00:09:20.541 
"req_id": 1 00:09:20.541 } 00:09:20.541 Got JSON-RPC error response 00:09:20.541 response: 00:09:20.541 { 00:09:20.541 "code": -114, 00:09:20.541 "message": "Operation already in progress" 00:09:20.541 } 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@652 -- # es=1 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:20.541 00:09:20.541 real 0m0.020s 00:09:20.541 user 0m0.004s 00:09:20.541 sys 0m0.001s 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 ************************************ 00:09:20.541 END TEST accel_scan_iaa_modules 00:09:20.541 ************************************ 00:09:20.541 10:28:36 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 ************************************ 00:09:20.541 START TEST accel_assign_opcode 00:09:20.541 ************************************ 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 [2024-05-15 10:28:36.365615] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:20.541 [2024-05-15 10:28:36.373616] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.541 10:28:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.665 software 00:09:28.665 00:09:28.665 real 0m7.177s 00:09:28.665 user 0m0.036s 00:09:28.665 sys 0m0.008s 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:28.665 10:28:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 ************************************ 00:09:28.665 END TEST accel_assign_opcode 00:09:28.665 ************************************ 00:09:28.665 10:28:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2531193 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 2531193 ']' 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 2531193 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2531193 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2531193' 00:09:28.665 killing process with pid 2531193 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@966 -- # kill 2531193 00:09:28.665 10:28:43 accel_rpc -- common/autotest_common.sh@971 -- # wait 2531193 00:09:30.567 00:09:30.567 real 0m11.057s 00:09:30.567 user 0m4.063s 00:09:30.567 sys 0m0.638s 00:09:30.567 10:28:46 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:30.567 10:28:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.567 ************************************ 00:09:30.567 END TEST accel_rpc 00:09:30.567 ************************************ 00:09:30.827 10:28:46 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:09:30.827 10:28:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:09:30.827 10:28:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:30.827 10:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 ************************************ 00:09:30.827 START TEST app_cmdline 00:09:30.827 ************************************ 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:09:30.827 * Looking for test storage... 
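The app_cmdline run that begins here starts the target with an RPC allow-list, so only spdk_get_version and rpc_get_methods are callable; any other method is expected to fail with JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats call below demonstrates. A condensed sketch of the same checks, again assuming relative paths inside an SPDK checkout:

  # expose only two RPC methods
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  # allowed: returns the version object seen in the log ("SPDK v24.05-pre git sha1 0e4f7fc9b")
  ./scripts/rpc.py spdk_get_version

  # allowed: enumerate the callable methods
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

  # not on the allow-list: expected to fail with code -32601, "Method not found"
  ./scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"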
00:09:30.827 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:09:30.827 10:28:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:30.827 10:28:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2533650 00:09:30.827 10:28:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2533650 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 2533650 ']' 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:30.827 10:28:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 10:28:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:30.827 [2024-05-15 10:28:46.647728] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:09:30.827 [2024-05-15 10:28:46.647861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533650 ] 00:09:31.087 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.087 [2024-05-15 10:28:46.777812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.087 [2024-05-15 10:28:46.869486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.654 10:28:47 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:31.654 10:28:47 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:09:31.654 10:28:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:31.654 { 00:09:31.654 "version": "SPDK v24.05-pre git sha1 0e4f7fc9b", 00:09:31.654 "fields": { 00:09:31.654 "major": 24, 00:09:31.654 "minor": 5, 00:09:31.654 "patch": 0, 00:09:31.654 "suffix": "-pre", 00:09:31.654 "commit": "0e4f7fc9b" 00:09:31.654 } 00:09:31.654 } 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:31.913 request: 00:09:31.913 { 00:09:31.913 "method": "env_dpdk_get_mem_stats", 00:09:31.913 "req_id": 1 00:09:31.913 } 00:09:31.913 Got JSON-RPC error response 00:09:31.913 response: 00:09:31.913 { 00:09:31.913 "code": -32601, 00:09:31.913 "message": "Method not found" 00:09:31.913 } 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:31.913 10:28:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2533650 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 2533650 ']' 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 2533650 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2533650 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2533650' 00:09:31.913 killing process with pid 2533650 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@966 -- # kill 2533650 00:09:31.913 10:28:47 app_cmdline -- common/autotest_common.sh@971 -- # wait 2533650 00:09:32.849 00:09:32.849 real 0m2.083s 00:09:32.849 user 0m2.267s 00:09:32.849 sys 0m0.485s 00:09:32.850 10:28:48 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:32.850 10:28:48 app_cmdline -- common/autotest_common.sh@10 -- 
# set +x 00:09:32.850 ************************************ 00:09:32.850 END TEST app_cmdline 00:09:32.850 ************************************ 00:09:32.850 10:28:48 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:09:32.850 10:28:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:09:32.850 10:28:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:32.850 10:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:32.850 ************************************ 00:09:32.850 START TEST version 00:09:32.850 ************************************ 00:09:32.850 10:28:48 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:09:32.850 * Looking for test storage... 00:09:32.850 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:09:32.850 10:28:48 version -- app/version.sh@17 -- # get_header_version major 00:09:32.850 10:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:32.850 10:28:48 version -- app/version.sh@14 -- # cut -f2 00:09:32.850 10:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.107 10:28:48 version -- app/version.sh@17 -- # major=24 00:09:33.107 10:28:48 version -- app/version.sh@18 -- # get_header_version minor 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # cut -f2 00:09:33.107 10:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.107 10:28:48 version -- app/version.sh@18 -- # minor=5 00:09:33.107 10:28:48 version -- app/version.sh@19 -- # get_header_version patch 00:09:33.107 10:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # cut -f2 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.107 10:28:48 version -- app/version.sh@19 -- # patch=0 00:09:33.107 10:28:48 version -- app/version.sh@20 -- # get_header_version suffix 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # cut -f2 00:09:33.107 10:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:33.107 10:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.107 10:28:48 version -- app/version.sh@20 -- # suffix=-pre 00:09:33.107 10:28:48 version -- app/version.sh@22 -- # version=24.5 00:09:33.107 10:28:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:33.107 10:28:48 version -- app/version.sh@28 -- # version=24.5rc0 00:09:33.107 10:28:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:09:33.107 10:28:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:33.107 10:28:48 version -- app/version.sh@30 -- # py_version=24.5rc0 00:09:33.107 10:28:48 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:33.107 00:09:33.107 
real 0m0.132s 00:09:33.108 user 0m0.065s 00:09:33.108 sys 0m0.096s 00:09:33.108 10:28:48 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:33.108 10:28:48 version -- common/autotest_common.sh@10 -- # set +x 00:09:33.108 ************************************ 00:09:33.108 END TEST version 00:09:33.108 ************************************ 00:09:33.108 10:28:48 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@194 -- # uname -s 00:09:33.108 10:28:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:33.108 10:28:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:33.108 10:28:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:33.108 10:28:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:33.108 10:28:48 -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:33.108 10:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:33.108 10:28:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:09:33.108 10:28:48 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:09:33.108 10:28:48 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:33.108 10:28:48 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:33.108 10:28:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:33.108 10:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:33.108 ************************************ 00:09:33.108 START TEST nvmf_tcp 00:09:33.108 ************************************ 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:33.108 * Looking for test storage... 00:09:33.108 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:33.108 10:28:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.108 10:28:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.108 10:28:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.108 10:28:48 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.108 10:28:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.108 10:28:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.108 10:28:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:09:33.108 10:28:48 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:33.108 10:28:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:33.108 10:28:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.366 ************************************ 00:09:33.366 START TEST nvmf_example 00:09:33.366 ************************************ 00:09:33.366 10:28:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.366 * Looking for test storage... 
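Before the example target comes up, nvmftestinit (further down in this run) moves one of the two detected ice ports into a private network namespace, addresses the pair on 10.0.0.0/24, and pings in both directions. A trimmed sketch of that plumbing, using the interface and namespace names that appear in the log below — cvl_0_0 / cvl_0_1 are this host's port names, so substitute your own, and run as root:

  # put the target-side port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side stays in the default namespace, target side lives in the new one
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links up and open TCP port 4420 for NVMe-oF
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # the same sanity pings that close out the setup below
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1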
00:09:33.366 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:33.366 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.367 10:28:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.637 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:09:38.638 Found 0000:27:00.0 (0x8086 - 0x159b) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:09:38.638 Found 0000:27:00.1 (0x8086 - 0x159b) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:09:38.638 Found net devices under 0000:27:00.0: cvl_0_0 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:09:38.638 Found net devices under 0000:27:00.1: cvl_0_1 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.638 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:09:38.898 00:09:38.898 --- 10.0.0.2 ping statistics --- 00:09:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.898 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:09:38.898 00:09:38.898 --- 10.0.0.1 ping statistics --- 00:09:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.898 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2537636 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2537636 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 2537636 ']' 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
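The interface discovery and network plumbing traced above come down to a short, reproducible sequence. The sketch below recaps it outside the test framework, using the names observed in this run (two ice ports at 0000:27:00.0/.1 exposing cvl_0_0 and cvl_0_1); the nvmf_tcp_init helper itself is not reproduced, and root privileges are assumed for the namespace and iptables steps.

  # Map each PCI function to its kernel netdev via sysfs, as the trace does:
  ls /sys/bus/pci/devices/0000:27:00.0/net/     # -> cvl_0_0 on this host
  ls /sys/bus/pci/devices/0000:27:00.1/net/     # -> cvl_0_1

  # Point-to-point topology: the target port moves into its own network
  # namespace so initiator traffic crosses the NIC instead of loopback.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port

  # Sanity checks mirroring the pings in the trace: both directions must answer.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1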
00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:38.898 10:28:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:38.898 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.531 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:39.791 10:28:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:39.791 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.020 
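The target bring-up just traced goes through rpc_cmd, the test framework's JSON-RPC helper; the same configuration can be issued with scripts/rpc.py against the default /var/tmp/spdk.sock. The sketch below is an approximation under that assumption, with paths written relative to an SPDK checkout rather than the Jenkins workspace and with the flag values copied from the trace.

  # Start the NVMe-oF example target inside the target namespace (as above),
  # then wait until it listens on /var/tmp/spdk.sock (the trace uses waitforlisten).
  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

  # Configure the running target over JSON-RPC.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive I/O from the initiator side with the perf tool, exactly as in this run.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'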
Initializing NVMe Controllers 00:09:52.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:52.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:52.020 Initialization complete. Launching workers. 00:09:52.020 ======================================================== 00:09:52.020 Latency(us) 00:09:52.020 Device Information : IOPS MiB/s Average min max 00:09:52.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18786.10 73.38 3408.42 703.62 16325.35 00:09:52.020 ======================================================== 00:09:52.020 Total : 18786.10 73.38 3408.42 703.62 16325.35 00:09:52.020 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.020 rmmod nvme_tcp 00:09:52.020 rmmod nvme_fabrics 00:09:52.020 rmmod nvme_keyring 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2537636 ']' 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2537636 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 2537636 ']' 00:09:52.020 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 2537636 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2537636 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2537636' 00:09:52.021 killing process with pid 2537636 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 2537636 00:09:52.021 10:29:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 2537636 00:09:52.021 nvmf threads initialize successfully 00:09:52.021 bdev subsystem init successfully 00:09:52.021 created a nvmf target service 00:09:52.021 create targets's poll groups done 00:09:52.021 all subsystems of target started 00:09:52.021 nvmf target is running 00:09:52.021 all subsystems of target stopped 00:09:52.021 destroy targets's poll groups done 00:09:52.021 destroyed the nvmf target service 00:09:52.021 bdev subsystem finish successfully 00:09:52.021 nvmf threads destroy successfully 00:09:52.021 10:29:06 
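The teardown that follows (and continues into the namespace cleanup on the next lines) reduces to a handful of steps. This recap assumes the PID and interface names from this run; remove_spdk_ns in the trace amounts to deleting the spdk-created namespace.

  sync
  modprobe -v -r nvme-tcp           # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                   # 2537636 in this run; the example app logs its shutdown sequence
  ip netns delete cvl_0_0_ns_spdk   # cvl_0_0 returns to the root namespace
  ip -4 addr flush cvl_0_1          # drop the initiator-side test address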
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.021 10:29:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.966 00:09:52.966 real 0m19.564s 00:09:52.966 user 0m46.828s 00:09:52.966 sys 0m5.238s 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:52.966 10:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.966 ************************************ 00:09:52.966 END TEST nvmf_example 00:09:52.966 ************************************ 00:09:52.966 10:29:08 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:52.966 10:29:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:52.966 10:29:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:52.966 10:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.966 ************************************ 00:09:52.966 START TEST nvmf_filesystem 00:09:52.966 ************************************ 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:52.966 * Looking for test storage... 
00:09:52.966 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:52.966 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:52.967 10:29:08 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:52.967 #define SPDK_CONFIG_H 00:09:52.967 #define SPDK_CONFIG_APPS 1 00:09:52.967 #define SPDK_CONFIG_ARCH native 00:09:52.967 #define SPDK_CONFIG_ASAN 1 00:09:52.967 #undef SPDK_CONFIG_AVAHI 00:09:52.967 #undef SPDK_CONFIG_CET 00:09:52.967 #define SPDK_CONFIG_COVERAGE 1 00:09:52.967 #define SPDK_CONFIG_CROSS_PREFIX 00:09:52.967 #undef SPDK_CONFIG_CRYPTO 00:09:52.967 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:52.967 #undef SPDK_CONFIG_CUSTOMOCF 00:09:52.967 #undef SPDK_CONFIG_DAOS 00:09:52.967 #define SPDK_CONFIG_DAOS_DIR 00:09:52.967 #define SPDK_CONFIG_DEBUG 1 00:09:52.967 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:52.967 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:09:52.967 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:52.967 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:52.967 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:52.967 #undef SPDK_CONFIG_DPDK_UADK 00:09:52.967 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:09:52.967 #define SPDK_CONFIG_EXAMPLES 1 00:09:52.967 #undef SPDK_CONFIG_FC 00:09:52.967 #define SPDK_CONFIG_FC_PATH 00:09:52.967 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:52.967 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:52.967 #undef SPDK_CONFIG_FUSE 00:09:52.967 #undef SPDK_CONFIG_FUZZER 00:09:52.967 #define SPDK_CONFIG_FUZZER_LIB 00:09:52.967 #undef SPDK_CONFIG_GOLANG 00:09:52.967 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:52.967 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:52.967 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:52.967 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:09:52.967 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:52.967 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:52.967 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:52.967 #define SPDK_CONFIG_IDXD 1 00:09:52.967 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:52.967 #undef SPDK_CONFIG_IPSEC_MB 00:09:52.967 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:52.967 #define SPDK_CONFIG_ISAL 1 00:09:52.967 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:52.967 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:52.967 #define SPDK_CONFIG_LIBDIR 00:09:52.967 #undef SPDK_CONFIG_LTO 00:09:52.967 #define SPDK_CONFIG_MAX_LCORES 00:09:52.967 #define SPDK_CONFIG_NVME_CUSE 1 00:09:52.967 #undef SPDK_CONFIG_OCF 00:09:52.967 #define SPDK_CONFIG_OCF_PATH 00:09:52.967 #define SPDK_CONFIG_OPENSSL_PATH 00:09:52.967 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:09:52.967 #define SPDK_CONFIG_PGO_DIR 00:09:52.967 #undef SPDK_CONFIG_PGO_USE 00:09:52.967 #define SPDK_CONFIG_PREFIX /usr/local 00:09:52.967 #undef SPDK_CONFIG_RAID5F 00:09:52.967 #undef SPDK_CONFIG_RBD 00:09:52.967 #define SPDK_CONFIG_RDMA 1 00:09:52.967 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:52.967 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:52.967 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:52.967 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:52.967 #define SPDK_CONFIG_SHARED 1 00:09:52.967 #undef SPDK_CONFIG_SMA 00:09:52.967 #define SPDK_CONFIG_TESTS 1 00:09:52.967 #undef SPDK_CONFIG_TSAN 00:09:52.967 #define SPDK_CONFIG_UBLK 1 00:09:52.967 #define SPDK_CONFIG_UBSAN 1 00:09:52.967 #undef SPDK_CONFIG_UNIT_TESTS 00:09:52.967 #undef SPDK_CONFIG_URING 00:09:52.967 #define SPDK_CONFIG_URING_PATH 00:09:52.967 #undef SPDK_CONFIG_URING_ZNS 00:09:52.967 #undef SPDK_CONFIG_USDT 00:09:52.967 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:52.967 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:52.967 #undef SPDK_CONFIG_VFIO_USER 00:09:52.967 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:52.967 #define SPDK_CONFIG_VHOST 1 00:09:52.967 #define SPDK_CONFIG_VIRTIO 1 00:09:52.967 #undef SPDK_CONFIG_VTUNE 00:09:52.967 #define SPDK_CONFIG_VTUNE_DIR 00:09:52.967 #define SPDK_CONFIG_WERROR 1 00:09:52.967 #define SPDK_CONFIG_WPDK_DIR 00:09:52.967 #undef SPDK_CONFIG_XNVME 00:09:52.967 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.967 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
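The long config.h dump above feeds a single decision in applications.sh: whether this is a debug build, and therefore whether debug variants of the apps may be selected when SPDK_AUTOTEST_DEBUG_APPS is set. A minimal stand-alone version of that check, assuming an SPDK checkout at $rootdir, looks like:

  rootdir=/path/to/spdk                      # $_root in the trace
  config_h=$rootdir/include/spdk/config.h

  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: debug app variants are allowed"
  else
      echo "release build: use the regular app binaries"
  fi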
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:52.968 
10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power ]] 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@88 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:52.968 10:29:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:52.968 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
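The long run of ': value' / 'export SPDK_TEST_*' pairs in this stretch is the framework giving every test knob a default and exporting it so child scripts and binaries all see the same settings. Judging from the expanded xtrace, the underlying idiom is bash's ${var:=default} expansion; the snippet below illustrates it with a few of the knobs seen here (the defaults shown are illustrative, the authoritative list lives in autotest_common.sh).

  # Assign a default only if the job config has not already set the knob, then export it.
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";    export SPDK_RUN_FUNCTIONAL_TEST    # 1 in this run
  : "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF              # 1 in this run
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT    # tcp in this run
  : "${SPDK_RUN_ASAN:=0}";               export SPDK_RUN_ASAN               # 1 in this run
  : "${SPDK_RUN_UBSAN:=0}";              export SPDK_RUN_UBSAN              # 1 in this run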
00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 1 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 1 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:52.969 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem 
-- common/autotest_common.sh@279 -- # MAKE=make 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j128 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2540411 ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2540411 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.DU6ra3 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DU6ra3/tests/target /tmp/spdk.DU6ra3 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:52.970 
10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=972197888 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4312231936 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=123695443968 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129472483328 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5777039360 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64731529216 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64736239616 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25884815360 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25894498304 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9682944 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=66560 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=437248 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64735444992 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64736243712 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=798720 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12947243008 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12947247104 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:52.970 * Looking for test storage... 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=123695443968 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:52.970 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7991631872 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.971 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:52.971 
10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.971 10:29:08 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:52.971 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.972 10:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:09:59.550 Found 0000:27:00.0 (0x8086 - 0x159b) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:09:59.550 Found 0000:27:00.1 (0x8086 - 0x159b) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:09:59.550 Found net devices under 0000:27:00.0: cvl_0_0 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:09:59.550 Found net devices under 0000:27:00.1: cvl_0_1 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.550 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
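For reference, the NIC discovery traced just above (nvmf/common.sh@382-401) comes down to globbing each device's net/ directory in sysfs to map a PCI address to its kernel netdev name. A minimal standalone sketch of that idea, using the two PCI addresses reported on this host (any other address would be hypothetical):

  #!/usr/bin/env bash
  # Map PCI network functions to kernel net device names, as the harness does above.
  pci_devs=(0000:27:00.0 0000:27:00.1)    # addresses found on this host; adjust per machine
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # Each network function exposes its netdev name(s) under net/ in sysfs.
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e "$dev" ]] || continue     # glob matched nothing: no netdev bound to this function
          net_devs+=("${dev##*/}")        # keep the leaf name only, e.g. cvl_0_0
      done
  done
  printf 'Found net device: %s\n' "${net_devs[@]}"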
00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:09:59.551 00:09:59.551 --- 10.0.0.2 ping statistics --- 00:09:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.551 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:09:59.551 00:09:59.551 --- 10.0.0.1 ping statistics --- 00:09:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.551 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.551 ************************************ 00:09:59.551 START TEST nvmf_filesystem_no_in_capsule 00:09:59.551 ************************************ 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 
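The nvmf_tcp_init sequence above moves one port of the two-port NIC into a private network namespace and addresses both sides from 10.0.0.0/24, so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, in the root namespace) exchange real TCP traffic over the looped ports. Consolidated into one script with the same names and addresses as this run (a sketch of the traced commands, run as root; not the library function itself):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                         # initiator -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator reachability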
00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2543944 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2543944 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2543944 ']' 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.551 10:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.551 [2024-05-15 10:29:14.644564] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:09:59.551 [2024-05-15 10:29:14.644669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.551 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.551 [2024-05-15 10:29:14.771038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.551 [2024-05-15 10:29:14.867411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.551 [2024-05-15 10:29:14.867448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.551 [2024-05-15 10:29:14.867458] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.551 [2024-05-15 10:29:14.867468] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.551 [2024-05-15 10:29:14.867476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
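nvmfappstart launches nvmf_tgt inside that namespace and then blocks on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step. The waitforlisten helper itself is not expanded in this excerpt; a rough hand-rolled equivalent of the launch-and-wait, assuming the default /var/tmp/spdk.sock RPC socket named in the log:

  NS=cvl_0_0_ns_spdk
  SPDK_BIN=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt
  # -i: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: run reactors on cores 0-3
  ip netns exec "$NS" "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [ -S /var/tmp/spdk.sock ] && break                     # RPC socket is up, target is ready
      sleep 0.1
  done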
00:09:59.551 [2024-05-15 10:29:14.867630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.551 [2024-05-15 10:29:14.867727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.551 [2024-05-15 10:29:14.867827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.551 [2024-05-15 10:29:14.867838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.551 [2024-05-15 10:29:15.397243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.551 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.812 Malloc1 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.812 [2024-05-15 10:29:15.675545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:59.812 [2024-05-15 10:29:15.675877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.812 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:10:00.072 { 00:10:00.072 "name": "Malloc1", 00:10:00.072 "aliases": [ 00:10:00.072 "e082ce6d-d4d3-41a4-8212-5c565e7ad0ad" 00:10:00.072 ], 00:10:00.072 "product_name": "Malloc disk", 00:10:00.072 "block_size": 512, 00:10:00.072 "num_blocks": 1048576, 00:10:00.072 "uuid": "e082ce6d-d4d3-41a4-8212-5c565e7ad0ad", 00:10:00.072 "assigned_rate_limits": { 00:10:00.072 "rw_ios_per_sec": 0, 00:10:00.072 "rw_mbytes_per_sec": 0, 00:10:00.072 "r_mbytes_per_sec": 0, 00:10:00.072 "w_mbytes_per_sec": 0 00:10:00.072 }, 00:10:00.072 "claimed": true, 00:10:00.072 "claim_type": "exclusive_write", 00:10:00.072 "zoned": false, 00:10:00.072 "supported_io_types": { 00:10:00.072 "read": true, 00:10:00.072 "write": true, 00:10:00.072 "unmap": true, 00:10:00.072 "write_zeroes": true, 00:10:00.072 "flush": true, 00:10:00.072 "reset": true, 00:10:00.072 "compare": false, 00:10:00.072 "compare_and_write": false, 00:10:00.072 "abort": true, 00:10:00.072 "nvme_admin": false, 00:10:00.072 "nvme_io": false 00:10:00.072 }, 00:10:00.072 "memory_domains": [ 00:10:00.072 { 00:10:00.072 "dma_device_id": "system", 00:10:00.072 "dma_device_type": 1 
00:10:00.072 }, 00:10:00.072 { 00:10:00.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.072 "dma_device_type": 2 00:10:00.072 } 00:10:00.072 ], 00:10:00.072 "driver_specific": {} 00:10:00.072 } 00:10:00.072 ]' 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:00.072 10:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.453 10:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.453 10:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:10:01.453 10:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.453 10:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:10:01.453 10:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:03.991 10:29:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:03.991 10:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:04.562 10:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.503 ************************************ 00:10:05.503 START TEST filesystem_ext4 00:10:05.503 ************************************ 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:10:05.503 10:29:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:10:05.503 10:29:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:05.503 mke2fs 1.46.5 (30-Dec-2021) 00:10:05.503 Discarding device blocks: 0/522240 done 00:10:05.503 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:05.503 Filesystem UUID: 6d5eff37-3158-4114-8e1b-e380e37aea78 00:10:05.503 Superblock backups stored on blocks: 00:10:05.503 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:05.503 00:10:05.503 Allocating group tables: 0/64 done 00:10:05.503 Writing inode tables: 0/64 done 00:10:06.073 Creating journal (8192 blocks): done 00:10:06.852 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:10:06.852 00:10:06.852 10:29:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:10:06.852 10:29:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.421 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2543944 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.715 00:10:07.715 real 0m2.158s 00:10:07.715 user 0m0.023s 00:10:07.715 sys 0m0.038s 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 ************************************ 00:10:07.715 END TEST filesystem_ext4 00:10:07.715 ************************************ 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:07.715 
10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 ************************************ 00:10:07.715 START TEST filesystem_btrfs 00:10:07.715 ************************************ 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:10:07.715 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:07.996 btrfs-progs v6.6.2 00:10:07.996 See https://btrfs.readthedocs.io for more information. 00:10:07.996 00:10:07.996 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:07.996 NOTE: several default settings have changed in version 5.15, please make sure 00:10:07.996 this does not affect your deployments: 00:10:07.996 - DUP for metadata (-m dup) 00:10:07.996 - enabled no-holes (-O no-holes) 00:10:07.996 - enabled free-space-tree (-R free-space-tree) 00:10:07.996 00:10:07.996 Label: (null) 00:10:07.996 UUID: d4f531ed-6f7f-4a58-bc69-e9632540e7ea 00:10:07.996 Node size: 16384 00:10:07.996 Sector size: 4096 00:10:07.996 Filesystem size: 510.00MiB 00:10:07.996 Block group profiles: 00:10:07.996 Data: single 8.00MiB 00:10:07.996 Metadata: DUP 32.00MiB 00:10:07.996 System: DUP 8.00MiB 00:10:07.996 SSD detected: yes 00:10:07.996 Zoned device: no 00:10:07.996 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:07.996 Runtime features: free-space-tree 00:10:07.996 Checksum: crc32c 00:10:07.996 Number of devices: 1 00:10:07.996 Devices: 00:10:07.996 ID SIZE PATH 00:10:07.996 1 510.00MiB /dev/nvme0n1p1 00:10:07.996 00:10:07.996 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:10:07.996 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.996 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2543944 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.255 00:10:08.255 real 0m0.520s 00:10:08.255 user 0m0.017s 00:10:08.255 sys 0m0.058s 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:08.255 ************************************ 00:10:08.255 END TEST filesystem_btrfs 00:10:08.255 ************************************ 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:08.255 10:29:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.255 ************************************ 00:10:08.255 START TEST filesystem_xfs 00:10:08.255 ************************************ 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:10:08.255 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:10:08.256 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:10:08.256 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:10:08.256 10:29:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:08.256 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:08.256 = sectsz=512 attr=2, projid32bit=1 00:10:08.256 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:08.256 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:08.256 data = bsize=4096 blocks=130560, imaxpct=25 00:10:08.256 = sunit=0 swidth=0 blks 00:10:08.256 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:08.256 log =internal log bsize=4096 blocks=16384, version=2 00:10:08.256 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:08.256 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:09.190 Discarding blocks...Done. 
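Each filesystem_* subtest above (ext4, btrfs, xfs) runs the same create/mount/write/teardown cycle from target/filesystem.sh. Stripped of the harness bookkeeping it is roughly the following, reusing the partition, mountpoint and target pid from this run:

  fstype=xfs                          # ext4 / btrfs / xfs across the three subtests
  dev=/dev/nvme0n1p1                  # partition created earlier with parted
  mnt=/mnt/device
  nvmfpid=2543944                     # nvmf_tgt pid in this run
  force=-f; [ "$fstype" = ext4 ] && force=-F     # ext4 takes -F, btrfs/xfs take -f
  mkfs."$fstype" "$force" "$dev"
  mount "$dev" "$mnt"
  touch "$mnt"/aaa                    # prove the NVMe/TCP-backed filesystem accepts writes
  sync
  rm "$mnt"/aaa
  sync
  umount "$mnt"
  kill -0 "$nvmfpid"                  # the target must still be alive after the I/O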
00:10:09.190 10:29:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:10:09.190 10:29:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2543944 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.095 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.096 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.096 00:10:11.096 real 0m2.841s 00:10:11.096 user 0m0.015s 00:10:11.096 sys 0m0.051s 00:10:11.096 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:11.096 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:11.096 ************************************ 00:10:11.096 END TEST filesystem_xfs 00:10:11.096 ************************************ 00:10:11.096 10:29:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:11.355 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:11.355 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.355 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.355 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:10:11.356 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:10:11.356 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.356 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:10:11.356 
10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2543944 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2543944 ']' 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2543944 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2543944 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2543944' 00:10:11.616 killing process with pid 2543944 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 2543944 00:10:11.616 [2024-05-15 10:29:27.295595] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:11.616 10:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 2543944 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:12.554 00:10:12.554 real 0m13.675s 00:10:12.554 user 0m52.838s 00:10:12.554 sys 0m1.052s 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.554 ************************************ 00:10:12.554 END TEST nvmf_filesystem_no_in_capsule 00:10:12.554 ************************************ 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.554 ************************************ 00:10:12.554 START TEST nvmf_filesystem_in_capsule 00:10:12.554 ************************************ 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2546817 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2546817 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2546817 ']' 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:12.554 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.555 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:12.555 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.555 10:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.555 [2024-05-15 10:29:28.403598] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:10:12.555 [2024-05-15 10:29:28.403728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.815 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.815 [2024-05-15 10:29:28.544974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.815 [2024-05-15 10:29:28.644758] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.815 [2024-05-15 10:29:28.644809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
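The EAL and app_setup_trace notices above are nvmf_tgt starting inside the cvl_0_0_ns_spdk namespace; nvmfappstart records the PID (2546817 here) and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough stand-in for that start-up, using the binary path and namespace from the trace (the polling loop is a simplification of waitforlisten, not its actual body):

  # simplified stand-in for nvmfappstart + waitforlisten
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to take commands
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done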
00:10:12.815 [2024-05-15 10:29:28.644820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.815 [2024-05-15 10:29:28.644831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.815 [2024-05-15 10:29:28.644838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.815 [2024-05-15 10:29:28.644926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.815 [2024-05-15 10:29:28.645019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.815 [2024-05-15 10:29:28.645120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.815 [2024-05-15 10:29:28.645131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.386 [2024-05-15 10:29:29.155587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.386 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 Malloc1 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.645 10:29:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 [2024-05-15 10:29:29.423647] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:13.645 [2024-05-15 10:29:29.423955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.645 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:10:13.645 { 00:10:13.645 "name": "Malloc1", 00:10:13.645 "aliases": [ 00:10:13.645 "824df3be-94da-4d95-8d56-7c7c4f07ab43" 00:10:13.645 ], 00:10:13.645 "product_name": "Malloc disk", 00:10:13.645 "block_size": 512, 00:10:13.645 "num_blocks": 1048576, 00:10:13.645 "uuid": "824df3be-94da-4d95-8d56-7c7c4f07ab43", 00:10:13.645 "assigned_rate_limits": { 00:10:13.645 "rw_ios_per_sec": 0, 00:10:13.645 "rw_mbytes_per_sec": 0, 00:10:13.645 "r_mbytes_per_sec": 0, 00:10:13.645 "w_mbytes_per_sec": 0 00:10:13.645 }, 00:10:13.645 "claimed": true, 00:10:13.645 "claim_type": "exclusive_write", 00:10:13.645 "zoned": false, 00:10:13.645 "supported_io_types": { 00:10:13.645 "read": true, 00:10:13.645 "write": true, 00:10:13.645 "unmap": true, 00:10:13.645 "write_zeroes": true, 00:10:13.645 "flush": true, 00:10:13.645 "reset": true, 
00:10:13.645 "compare": false, 00:10:13.645 "compare_and_write": false, 00:10:13.645 "abort": true, 00:10:13.645 "nvme_admin": false, 00:10:13.645 "nvme_io": false 00:10:13.645 }, 00:10:13.645 "memory_domains": [ 00:10:13.645 { 00:10:13.645 "dma_device_id": "system", 00:10:13.646 "dma_device_type": 1 00:10:13.646 }, 00:10:13.646 { 00:10:13.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.646 "dma_device_type": 2 00:10:13.646 } 00:10:13.646 ], 00:10:13.646 "driver_specific": {} 00:10:13.646 } 00:10:13.646 ]' 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.646 10:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.554 10:29:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.554 10:29:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:10:15.554 10:29:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.554 10:29:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:10:15.554 10:29:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:17.467 10:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.467 10:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:17.724 10:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:19.104 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:19.104 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:19.104 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:19.104 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:19.104 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.104 ************************************ 00:10:19.104 START TEST filesystem_in_capsule_ext4 00:10:19.104 ************************************ 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:19.105 mke2fs 1.46.5 (30-Dec-2021) 00:10:19.105 Discarding device blocks: 0/522240 done 00:10:19.105 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:19.105 Filesystem UUID: b42b4141-6319-47f5-9506-eb3d3fb3ba97 00:10:19.105 Superblock backups stored on blocks: 00:10:19.105 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:19.105 00:10:19.105 Allocating group tables: 0/64 done 00:10:19.105 Writing inode tables: 0/64 done 00:10:19.105 Creating journal (8192 blocks): done 00:10:19.105 Writing superblocks and filesystem accounting information: 0/64 done 00:10:19.105 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2546817 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.105 00:10:19.105 real 0m0.365s 00:10:19.105 user 0m0.019s 00:10:19.105 sys 0m0.036s 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:19.105 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:19.105 ************************************ 00:10:19.105 END TEST filesystem_in_capsule_ext4 00:10:19.105 ************************************ 00:10:19.365 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:19.365 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:19.365 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:19.365 10:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.365 ************************************ 00:10:19.365 START TEST filesystem_in_capsule_btrfs 00:10:19.365 ************************************ 00:10:19.365 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:19.365 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:19.365 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:10:19.366 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:19.627 btrfs-progs v6.6.2 00:10:19.627 See https://btrfs.readthedocs.io for more information. 00:10:19.627 00:10:19.627 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:19.627 NOTE: several default settings have changed in version 5.15, please make sure 00:10:19.627 this does not affect your deployments: 00:10:19.627 - DUP for metadata (-m dup) 00:10:19.627 - enabled no-holes (-O no-holes) 00:10:19.627 - enabled free-space-tree (-R free-space-tree) 00:10:19.627 00:10:19.627 Label: (null) 00:10:19.627 UUID: 929fdc8f-a9f4-48d6-83fc-c6cc222ae466 00:10:19.627 Node size: 16384 00:10:19.627 Sector size: 4096 00:10:19.627 Filesystem size: 510.00MiB 00:10:19.627 Block group profiles: 00:10:19.627 Data: single 8.00MiB 00:10:19.627 Metadata: DUP 32.00MiB 00:10:19.627 System: DUP 8.00MiB 00:10:19.627 SSD detected: yes 00:10:19.627 Zoned device: no 00:10:19.627 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:19.627 Runtime features: free-space-tree 00:10:19.627 Checksum: crc32c 00:10:19.627 Number of devices: 1 00:10:19.627 Devices: 00:10:19.627 ID SIZE PATH 00:10:19.627 1 510.00MiB /dev/nvme0n1p1 00:10:19.627 00:10:19.627 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:10:19.627 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2546817 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.886 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.887 00:10:19.887 real 0m0.686s 00:10:19.887 user 0m0.018s 00:10:19.887 sys 0m0.059s 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.887 ************************************ 00:10:19.887 END TEST filesystem_in_capsule_btrfs 00:10:19.887 ************************************ 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:19.887 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.146 ************************************ 00:10:20.146 START TEST filesystem_in_capsule_xfs 00:10:20.146 ************************************ 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:10:20.146 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:10:20.147 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:10:20.147 10:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:20.147 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:20.147 = sectsz=512 attr=2, projid32bit=1 00:10:20.147 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:20.147 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:20.147 data = bsize=4096 blocks=130560, imaxpct=25 00:10:20.147 = sunit=0 swidth=0 blks 00:10:20.147 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:20.147 log =internal log bsize=4096 blocks=16384, version=2 00:10:20.147 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:20.147 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:21.084 Discarding blocks...Done. 
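Every mkfs banner in this in-capsule block is formatting the same 512 MiB Malloc1 bdev that was exported a moment earlier; the only difference from the first suite is the 4096-byte in-capsule data size passed when the transport is created. Pulling the rpc_cmd, nvme and parted calls out of the trace gives roughly this sequence (rpc.py stands for scripts/rpc.py, which the rpc_cmd wrapper invokes):

  # target side, as logged by target/filesystem.sh
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096            # in-capsule data up to 4 KiB
  rpc.py bdev_malloc_create 512 512 -b Malloc1                      # 512 MiB, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 \
               --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%       # one partition over the whole namespace
  partprobe

The block_size and num_blocks fields pulled out of bdev_get_bdevs with jq earlier in the trace are what the harness multiplies and compares against the 536870912 bytes reported for nvme0n1 before it creates that partition.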
00:10:21.084 10:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:10:21.084 10:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2546817 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.625 00:10:23.625 real 0m3.449s 00:10:23.625 user 0m0.015s 00:10:23.625 sys 0m0.049s 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:23.625 ************************************ 00:10:23.625 END TEST filesystem_in_capsule_xfs 00:10:23.625 ************************************ 00:10:23.625 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.885 10:29:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.885 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2546817 00:10:23.886 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2546817 ']' 00:10:23.886 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2546817 00:10:23.886 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:10:23.886 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:23.886 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2546817 00:10:24.147 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:24.147 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:24.147 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2546817' 00:10:24.147 killing process with pid 2546817 00:10:24.147 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 2546817 00:10:24.147 [2024-05-15 10:29:39.786141] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:24.147 10:29:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 2546817 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:25.088 00:10:25.088 real 0m12.428s 00:10:25.088 user 0m47.685s 00:10:25.088 sys 0m1.071s 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.088 ************************************ 00:10:25.088 END TEST nvmf_filesystem_in_capsule 00:10:25.088 ************************************ 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.088 rmmod nvme_tcp 00:10:25.088 rmmod nvme_fabrics 00:10:25.088 rmmod nvme_keyring 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.088 10:29:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.666 10:29:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.666 00:10:27.666 real 0m34.306s 00:10:27.666 user 1m42.089s 00:10:27.666 sys 0m6.596s 00:10:27.666 10:29:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:27.666 10:29:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 ************************************ 00:10:27.666 END TEST nvmf_filesystem 00:10:27.666 ************************************ 00:10:27.666 10:29:42 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:27.666 10:29:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:27.666 10:29:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:27.666 10:29:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 ************************************ 00:10:27.666 START TEST nvmf_target_discovery 00:10:27.666 ************************************ 00:10:27.666 10:29:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:27.666 * Looking for test storage... 
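The rmmod lines above are the visible half of the teardown that runs between the two suites: the filesystem test drops its partition, disconnects the initiator, deletes the subsystem, stops nvmf_tgt, and nvmftestfini then unloads the kernel initiator modules and removes the test namespace. Condensed from the logged commands (the namespace removal is an approximation of what _remove_spdk_ns does):

  # teardown between nvmf_filesystem and nvmf_target_discovery (condensed)
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                # killprocess 2546817
  modprobe -v -r nvme-tcp                           # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk                   # roughly what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1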
00:10:27.666 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.666 10:29:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.946 10:29:48 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:32.946 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:32.946 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:32.946 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:32.947 Found net devices under 0000:27:00.0: cvl_0_0 
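The trace above resolves each detected PCI function (here 0000:27:00.0, an Intel 0x159b port bound to the ice driver) to its kernel network interface by globbing sysfs. A standalone shell sketch of that lookup, with the PCI address copied from the log record above; run on a similar host it only reproduces the "Found net devices" message, nothing else from the harness:

    # Resolve the net device name(s) behind one PCI function, mirroring the
    # pci_net_devs glob traced from nvmf/common.sh above.
    pci=0000:27:00.0                                    # address taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"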
00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:32.947 Found net devices under 0000:27:00.1: cvl_0_1 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:10:32.947 00:10:32.947 --- 10.0.0.2 ping statistics --- 00:10:32.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.947 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:10:32.947 00:10:32.947 --- 10.0.0.1 ping statistics --- 00:10:32.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.947 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2553267 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2553267 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 2553267 ']' 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
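nvmf_tcp_init, traced above and repeated later for the referrals test, builds the TCP test topology by isolating the target-side NIC in its own network namespace so initiator and target talk over real interfaces on a single host. Condensed into a plain shell sketch; the cvl_0_0/cvl_0_1 names, the 10.0.0.0/24 addresses and the cvl_0_0_ns_spdk namespace are the values from this run, and root privileges are assumed:

    # Target NIC goes into a private namespace with 10.0.0.2; the initiator
    # keeps cvl_0_1 with 10.0.0.1 in the default namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1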
00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.947 10:29:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.947 [2024-05-15 10:29:48.615484] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:10:32.947 [2024-05-15 10:29:48.615584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.947 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.947 [2024-05-15 10:29:48.734840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.207 [2024-05-15 10:29:48.830153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.207 [2024-05-15 10:29:48.830191] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.207 [2024-05-15 10:29:48.830201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.207 [2024-05-15 10:29:48.830210] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.207 [2024-05-15 10:29:48.830217] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.207 [2024-05-15 10:29:48.830297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.207 [2024-05-15 10:29:48.830391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.207 [2024-05-15 10:29:48.830491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.207 [2024-05-15 10:29:48.830501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.468 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:33.468 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:10:33.468 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.468 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:33.468 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 [2024-05-15 10:29:49.369879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null1 102400 512 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 Null1 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 [2024-05-15 10:29:49.421868] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:33.729 [2024-05-15 10:29:49.422162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 Null2 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 Null3 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.729 Null4 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.729 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.730 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 4420 00:10:33.989 00:10:33.989 Discovery Log Number of Records 6, Generation counter 6 00:10:33.989 =====Discovery Log Entry 0====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: current discovery subsystem 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4420 00:10:33.989 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: explicit discovery connections, duplicate discovery information 00:10:33.989 sectype: none 00:10:33.989 =====Discovery Log Entry 1====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: nvme subsystem 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4420 00:10:33.989 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: none 00:10:33.989 sectype: none 00:10:33.989 =====Discovery Log Entry 2====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: nvme subsystem 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4420 00:10:33.989 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: none 00:10:33.989 sectype: none 00:10:33.989 =====Discovery Log Entry 3====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: nvme subsystem 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4420 00:10:33.989 subnqn: 
nqn.2016-06.io.spdk:cnode3 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: none 00:10:33.989 sectype: none 00:10:33.989 =====Discovery Log Entry 4====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: nvme subsystem 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4420 00:10:33.989 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: none 00:10:33.989 sectype: none 00:10:33.989 =====Discovery Log Entry 5====== 00:10:33.989 trtype: tcp 00:10:33.989 adrfam: ipv4 00:10:33.989 subtype: discovery subsystem referral 00:10:33.989 treq: not required 00:10:33.989 portid: 0 00:10:33.989 trsvcid: 4430 00:10:33.989 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:33.989 traddr: 10.0.0.2 00:10:33.989 eflags: none 00:10:33.989 sectype: none 00:10:33.989 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:33.989 Perform nvmf subsystem discovery via RPC 00:10:33.989 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:33.989 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.989 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.989 [ 00:10:33.989 { 00:10:33.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:33.989 "subtype": "Discovery", 00:10:33.989 "listen_addresses": [ 00:10:33.989 { 00:10:33.989 "trtype": "TCP", 00:10:33.989 "adrfam": "IPv4", 00:10:33.989 "traddr": "10.0.0.2", 00:10:33.989 "trsvcid": "4420" 00:10:33.989 } 00:10:33.989 ], 00:10:33.989 "allow_any_host": true, 00:10:33.989 "hosts": [] 00:10:33.989 }, 00:10:33.989 { 00:10:33.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.989 "subtype": "NVMe", 00:10:33.989 "listen_addresses": [ 00:10:33.989 { 00:10:33.989 "trtype": "TCP", 00:10:33.989 "adrfam": "IPv4", 00:10:33.989 "traddr": "10.0.0.2", 00:10:33.989 "trsvcid": "4420" 00:10:33.989 } 00:10:33.989 ], 00:10:33.990 "allow_any_host": true, 00:10:33.990 "hosts": [], 00:10:33.990 "serial_number": "SPDK00000000000001", 00:10:33.990 "model_number": "SPDK bdev Controller", 00:10:33.990 "max_namespaces": 32, 00:10:33.990 "min_cntlid": 1, 00:10:33.990 "max_cntlid": 65519, 00:10:33.990 "namespaces": [ 00:10:33.990 { 00:10:33.990 "nsid": 1, 00:10:33.990 "bdev_name": "Null1", 00:10:33.990 "name": "Null1", 00:10:33.990 "nguid": "5436BF6794CF45C8B79E60311F1E51F8", 00:10:33.990 "uuid": "5436bf67-94cf-45c8-b79e-60311f1e51f8" 00:10:33.990 } 00:10:33.990 ] 00:10:33.990 }, 00:10:33.990 { 00:10:33.990 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:33.990 "subtype": "NVMe", 00:10:33.990 "listen_addresses": [ 00:10:33.990 { 00:10:33.990 "trtype": "TCP", 00:10:33.990 "adrfam": "IPv4", 00:10:33.990 "traddr": "10.0.0.2", 00:10:33.990 "trsvcid": "4420" 00:10:33.990 } 00:10:33.990 ], 00:10:33.990 "allow_any_host": true, 00:10:33.990 "hosts": [], 00:10:33.990 "serial_number": "SPDK00000000000002", 00:10:33.990 "model_number": "SPDK bdev Controller", 00:10:33.990 "max_namespaces": 32, 00:10:33.990 "min_cntlid": 1, 00:10:33.990 "max_cntlid": 65519, 00:10:33.990 "namespaces": [ 00:10:33.990 { 00:10:33.990 "nsid": 1, 00:10:33.990 "bdev_name": "Null2", 00:10:33.990 "name": "Null2", 00:10:33.990 "nguid": "45C4D28DC80A4BF985000FFF2D9AE2C1", 00:10:33.990 "uuid": "45c4d28d-c80a-4bf9-8500-0fff2d9ae2c1" 00:10:33.990 } 00:10:33.990 ] 00:10:33.990 }, 00:10:33.990 { 00:10:33.990 "nqn": "nqn.2016-06.io.spdk:cnode3", 
00:10:33.990 "subtype": "NVMe", 00:10:33.990 "listen_addresses": [ 00:10:33.990 { 00:10:33.990 "trtype": "TCP", 00:10:33.990 "adrfam": "IPv4", 00:10:33.990 "traddr": "10.0.0.2", 00:10:33.990 "trsvcid": "4420" 00:10:33.990 } 00:10:33.990 ], 00:10:33.990 "allow_any_host": true, 00:10:33.990 "hosts": [], 00:10:33.990 "serial_number": "SPDK00000000000003", 00:10:33.990 "model_number": "SPDK bdev Controller", 00:10:33.990 "max_namespaces": 32, 00:10:33.990 "min_cntlid": 1, 00:10:33.990 "max_cntlid": 65519, 00:10:33.990 "namespaces": [ 00:10:33.990 { 00:10:33.990 "nsid": 1, 00:10:33.990 "bdev_name": "Null3", 00:10:33.990 "name": "Null3", 00:10:33.990 "nguid": "FB640B7F1F5E4070999B74B6DB669D1F", 00:10:33.990 "uuid": "fb640b7f-1f5e-4070-999b-74b6db669d1f" 00:10:33.990 } 00:10:33.990 ] 00:10:33.990 }, 00:10:33.990 { 00:10:33.990 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:33.990 "subtype": "NVMe", 00:10:33.990 "listen_addresses": [ 00:10:33.990 { 00:10:33.990 "trtype": "TCP", 00:10:33.990 "adrfam": "IPv4", 00:10:33.990 "traddr": "10.0.0.2", 00:10:33.990 "trsvcid": "4420" 00:10:33.990 } 00:10:33.990 ], 00:10:33.990 "allow_any_host": true, 00:10:33.990 "hosts": [], 00:10:33.990 "serial_number": "SPDK00000000000004", 00:10:33.990 "model_number": "SPDK bdev Controller", 00:10:33.990 "max_namespaces": 32, 00:10:33.990 "min_cntlid": 1, 00:10:33.990 "max_cntlid": 65519, 00:10:33.990 "namespaces": [ 00:10:33.990 { 00:10:33.990 "nsid": 1, 00:10:33.990 "bdev_name": "Null4", 00:10:33.990 "name": "Null4", 00:10:33.990 "nguid": "5019B50F111048579C870F5502DF5097", 00:10:33.990 "uuid": "5019b50f-1110-4857-9c87-0f5502df5097" 00:10:33.990 } 00:10:33.990 ] 00:10:33.990 } 00:10:33.990 ] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:33.990 10:29:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:33.990 10:29:49 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.990 rmmod nvme_tcp 00:10:33.990 rmmod nvme_fabrics 00:10:33.990 rmmod nvme_keyring 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2553267 ']' 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2553267 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 2553267 ']' 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 2553267 00:10:33.990 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2553267 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2553267' 00:10:34.249 killing process with pid 2553267 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 2553267 00:10:34.249 [2024-05-15 10:29:49.904977] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:34.249 10:29:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 2553267 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.508 10:29:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.044 10:29:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:10:37.044 00:10:37.044 real 0m9.433s 00:10:37.044 user 0m7.318s 00:10:37.044 sys 0m4.367s 00:10:37.044 10:29:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:37.044 10:29:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 ************************************ 00:10:37.044 END TEST nvmf_target_discovery 00:10:37.044 ************************************ 00:10:37.044 10:29:52 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:37.044 10:29:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:37.044 10:29:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:37.044 10:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 ************************************ 00:10:37.044 START TEST nvmf_referrals 00:10:37.044 ************************************ 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:37.044 * Looking for test storage... 00:10:37.044 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- 
# NVMF_REFERRAL_IP_1=127.0.0.2 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.044 10:29:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.323 
10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:42.323 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:42.323 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:42.323 Found net devices under 0000:27:00.0: cvl_0_0 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:42.323 Found net devices under 0000:27:00.1: cvl_0_1 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.323 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals 
-- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.324 10:29:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:10:42.324 00:10:42.324 --- 10.0.0.2 ping statistics --- 00:10:42.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.324 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:10:42.324 00:10:42.324 --- 10.0.0.1 ping statistics --- 00:10:42.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.324 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2557477 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2557477 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 2557477 ']' 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
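The nvmf_referrals test starting here exercises discovery referrals: a discovery listener on port 8009 plus three referral entries pointing at 127.0.0.2-127.0.0.4 port 4430. The rpc_cmd helper in the trace issues these as SPDK JSON-RPCs; a sketch of the same setup as direct scripts/rpc.py calls follows — the rpc.py path (relative to the SPDK tree) is an assumption, while the method names, flags and the /var/tmp/spdk.sock socket are the ones traced below:

    # Transport, discovery listener and three referrals, as driven by referrals.sh.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3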
00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.324 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.324 [2024-05-15 10:29:58.172307] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:10:42.324 [2024-05-15 10:29:58.172413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.586 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.586 [2024-05-15 10:29:58.294713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.586 [2024-05-15 10:29:58.390329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.586 [2024-05-15 10:29:58.390368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.586 [2024-05-15 10:29:58.390378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.586 [2024-05-15 10:29:58.390387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.586 [2024-05-15 10:29:58.390394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.586 [2024-05-15 10:29:58.390476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.586 [2024-05-15 10:29:58.390580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.586 [2024-05-15 10:29:58.390679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.586 [2024-05-15 10:29:58.390691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 [2024-05-15 10:29:58.921434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 [2024-05-15 10:29:58.937398] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:43.154 [2024-05-15 10:29:58.937664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.154 10:29:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.154 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.414 
10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.414 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.675 10:29:59 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.675 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.935 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.936 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r 
'.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.194 10:29:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:44.194 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:44.194 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:44.194 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:44.194 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:44.194 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 
--hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.452 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.711 rmmod nvme_tcp 00:10:44.711 rmmod nvme_fabrics 00:10:44.711 rmmod nvme_keyring 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2557477 ']' 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2557477 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 2557477 ']' 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 2557477 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2557477 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2557477' 00:10:44.711 killing process with pid 2557477 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 2557477 00:10:44.711 [2024-05-15 10:30:00.421971] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:44.711 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 2557477 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
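Editor's note: each referral check traced above pairs an rpc_cmd call with an nvme discover pass so that the target's referral list and the host-visible discovery log page can be compared. A minimal sketch of that round trip, where scripts/rpc.py stands in for the rpc_cmd wrapper and the --hostnqn/--hostid flags from the trace are dropped for brevity; the jq filter is the one used by get_referral_ips above:

  # Add a referral, read it back over RPC, confirm a host sees it, then remove it.
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1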
00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.280 10:30:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.184 10:30:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.184 00:10:47.184 real 0m10.428s 00:10:47.184 user 0m11.531s 00:10:47.184 sys 0m4.727s 00:10:47.184 10:30:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:47.184 10:30:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 ************************************ 00:10:47.184 END TEST nvmf_referrals 00:10:47.184 ************************************ 00:10:47.184 10:30:02 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:47.184 10:30:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:47.184 10:30:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:47.184 10:30:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 ************************************ 00:10:47.184 START TEST nvmf_connect_disconnect 00:10:47.184 ************************************ 00:10:47.184 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:47.462 * Looking for test storage... 00:10:47.462 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:47.462 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.463 10:30:03 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.463 10:30:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:54.044 10:30:08 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:54.044 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:54.044 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
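Editor's note: gather_supported_nvmf_pci_devs, whose trace continues below, first buckets NICs by PCI device ID (the 0x8086:0x159b functions here land in the e810 list and are bound to the ice driver) and then resolves each PCI address to its kernel netdev names through sysfs. A simplified sketch of that resolution loop, assuming the pci_devs array has already been filled; this is not a copy of nvmf/common.sh:

  for pci in "${pci_devs[@]}"; do
      # each entry under .../net/ is a netdev name, e.g. cvl_0_0
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path, keep the name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done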
00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:54.044 Found net devices under 0000:27:00.0: cvl_0_0 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:54.044 Found net devices under 0000:27:00.1: cvl_0_1 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.044 10:30:08 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.044 10:30:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.044 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:54.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:10:54.044 00:10:54.044 --- 10.0.0.2 ping statistics --- 00:10:54.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.045 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:10:54.045 00:10:54.045 --- 10.0.0.1 ping statistics --- 00:10:54.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.045 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2562641 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2562641 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 2562641 ']' 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.045 10:30:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.045 [2024-05-15 10:30:09.427988] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
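Editor's note: nvmfappstart, traced just above, launches the freshly built nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A rough equivalent of that launch-and-wait step; the polling loop is illustrative only, the real helper lives in test/common/autotest_common.sh:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to accept commands.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done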
00:10:54.045 [2024-05-15 10:30:09.428126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.045 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.045 [2024-05-15 10:30:09.569239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.045 [2024-05-15 10:30:09.670083] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.045 [2024-05-15 10:30:09.670136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.045 [2024-05-15 10:30:09.670147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.045 [2024-05-15 10:30:09.670157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.045 [2024-05-15 10:30:09.670165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.045 [2024-05-15 10:30:09.670230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.045 [2024-05-15 10:30:09.670339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.045 [2024-05-15 10:30:09.670441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.045 [2024-05-15 10:30:09.670451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.304 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:54.304 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:10:54.304 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.304 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:54.304 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 [2024-05-15 10:30:10.191870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.563 10:30:10 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.563 [2024-05-15 10:30:10.264239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:54.563 [2024-05-15 10:30:10.264588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:54.563 10:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:58.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.823 rmmod nvme_tcp 00:11:12.823 rmmod nvme_fabrics 00:11:12.823 rmmod nvme_keyring 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:12.823 10:30:28 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2562641 ']' 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2562641 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2562641 ']' 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 2562641 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2562641 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2562641' 00:11:12.823 killing process with pid 2562641 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 2562641 00:11:12.823 [2024-05-15 10:30:28.183703] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:12.823 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 2562641 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.083 10:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.034 10:30:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.034 00:11:15.034 real 0m27.776s 00:11:15.034 user 1m16.972s 00:11:15.034 sys 0m5.656s 00:11:15.034 10:30:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:15.034 10:30:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.034 ************************************ 00:11:15.034 END TEST nvmf_connect_disconnect 00:11:15.034 ************************************ 00:11:15.034 10:30:30 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:15.034 10:30:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:15.034 10:30:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:15.034 10:30:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.034 ************************************ 00:11:15.034 START TEST nvmf_multitarget 
00:11:15.034 ************************************ 00:11:15.034 10:30:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:15.294 * Looking for test storage... 00:11:15.294 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.294 10:30:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.295 
10:30:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.295 10:30:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.575 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ '' == 
mlx5 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:20.576 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:20.576 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:20.576 Found net devices under 0000:27:00.0: cvl_0_0 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.576 10:30:36 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:20.576 Found net devices under 0000:27:00.1: cvl_0_1 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.576 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:20.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:11:20.837 00:11:20.837 --- 10.0.0.2 ping statistics --- 00:11:20.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.837 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:20.837 00:11:20.837 --- 10.0.0.1 ping statistics --- 00:11:20.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.837 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:20.837 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2570438 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2570438 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 2570438 ']' 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:20.838 10:30:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.838 [2024-05-15 10:30:36.685206] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
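What the harness just set up above, stripped of the xtrace noise: one of the two detected ice ports (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, the two get 10.0.0.2/24 and 10.0.0.1/24, TCP port 4420 is opened in the firewall, and reachability is checked with ping in both directions. A minimal sketch of the same topology, using the interface, namespace and address names taken from the log (any other names would do):

  # target-side port goes into its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator address in the root namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring both ends (plus the namespaced loopback) up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # let NVMe/TCP traffic reach the default port 4420 and sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1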
00:11:20.838 [2024-05-15 10:30:36.685322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.098 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.098 [2024-05-15 10:30:36.812990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.098 [2024-05-15 10:30:36.908969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.098 [2024-05-15 10:30:36.909009] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.098 [2024-05-15 10:30:36.909019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.098 [2024-05-15 10:30:36.909027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.098 [2024-05-15 10:30:36.909038] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.098 [2024-05-15 10:30:36.909127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.098 [2024-05-15 10:30:36.909223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.098 [2024-05-15 10:30:36.909323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.099 [2024-05-15 10:30:36.909334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:21.669 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:21.927 "nvmf_tgt_1" 00:11:21.927 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:21.927 "nvmf_tgt_2" 00:11:21.927 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:21.927 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:21.927 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:21.927 10:30:37 
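The nvmf_multitarget exercise around this point (target creation above, deletion just below) reduces to a short sequence against the multitarget_rpc.py helper; a condensed sketch, with the expected jq length values the script checks for (the helper path is the one from the log, the default RPC socket is assumed):

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  $RPC nvmf_get_targets | jq length          # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length          # 3: the default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length          # back to 1 after cleanup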
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:22.188 true 00:11:22.188 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:22.188 true 00:11:22.188 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:22.188 10:30:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.188 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.449 rmmod nvme_tcp 00:11:22.449 rmmod nvme_fabrics 00:11:22.449 rmmod nvme_keyring 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2570438 ']' 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2570438 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 2570438 ']' 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 2570438 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2570438 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2570438' 00:11:22.449 killing process with pid 2570438 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 2570438 00:11:22.449 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 2570438 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.020 10:30:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.932 10:30:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:24.932 00:11:24.932 real 0m9.826s 00:11:24.932 user 0m8.594s 00:11:24.932 sys 0m4.741s 00:11:24.932 10:30:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:24.932 10:30:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:24.932 ************************************ 00:11:24.932 END TEST nvmf_multitarget 00:11:24.932 ************************************ 00:11:24.932 10:30:40 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:24.932 10:30:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:24.932 10:30:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:24.932 10:30:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:24.932 ************************************ 00:11:24.932 START TEST nvmf_rpc 00:11:24.933 ************************************ 00:11:24.933 10:30:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:25.193 * Looking for test storage... 00:11:25.193 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.193 10:30:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.479 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:30.480 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:30.480 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:30.480 Found net devices under 0000:27:00.0: cvl_0_0 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:30.480 Found net devices under 0000:27:00.1: cvl_0_1 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.480 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:30.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:30.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:11:30.739 00:11:30.739 --- 10.0.0.2 ping statistics --- 00:11:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.739 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:30.739 00:11:30.739 --- 10.0.0.1 ping statistics --- 00:11:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.739 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:30.739 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2574678 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2574678 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 2574678 ']' 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.740 10:30:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.740 [2024-05-15 10:30:46.599065] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
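As in the multitarget run, the nvmf_tgt application is launched inside the target namespace, so NVMe/TCP traffic can only arrive through the namespaced port. A rough sketch of that launch-and-wait step, with the binary path and flags taken from the log; the readiness poll via rpc_get_methods is an illustrative stand-in for what autotest_common.sh's waitforlisten does:

  # -i shared-memory id, -e tracepoint group mask, -m reactor core mask (values from the log)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll the default RPC socket until the target answers before issuing further RPCs
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done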
00:11:30.740 [2024-05-15 10:30:46.599172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.999 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.999 [2024-05-15 10:30:46.723886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.999 [2024-05-15 10:30:46.822033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.999 [2024-05-15 10:30:46.822080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.999 [2024-05-15 10:30:46.822090] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.999 [2024-05-15 10:30:46.822099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.999 [2024-05-15 10:30:46.822107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.999 [2024-05-15 10:30:46.822174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.999 [2024-05-15 10:30:46.822283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.999 [2024-05-15 10:30:46.822384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.999 [2024-05-15 10:30:46.822395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:31.569 "tick_rate": 1900000000, 00:11:31.569 "poll_groups": [ 00:11:31.569 { 00:11:31.569 "name": "nvmf_tgt_poll_group_000", 00:11:31.569 "admin_qpairs": 0, 00:11:31.569 "io_qpairs": 0, 00:11:31.569 "current_admin_qpairs": 0, 00:11:31.569 "current_io_qpairs": 0, 00:11:31.569 "pending_bdev_io": 0, 00:11:31.569 "completed_nvme_io": 0, 00:11:31.569 "transports": [] 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "nvmf_tgt_poll_group_001", 00:11:31.569 "admin_qpairs": 0, 00:11:31.569 "io_qpairs": 0, 00:11:31.569 "current_admin_qpairs": 0, 00:11:31.569 "current_io_qpairs": 0, 00:11:31.569 "pending_bdev_io": 0, 00:11:31.569 "completed_nvme_io": 0, 00:11:31.569 "transports": [] 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "nvmf_tgt_poll_group_002", 00:11:31.569 "admin_qpairs": 0, 00:11:31.569 "io_qpairs": 0, 00:11:31.569 "current_admin_qpairs": 0, 00:11:31.569 "current_io_qpairs": 0, 00:11:31.569 "pending_bdev_io": 0, 00:11:31.569 "completed_nvme_io": 0, 00:11:31.569 "transports": [] 
00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "nvmf_tgt_poll_group_003", 00:11:31.569 "admin_qpairs": 0, 00:11:31.569 "io_qpairs": 0, 00:11:31.569 "current_admin_qpairs": 0, 00:11:31.569 "current_io_qpairs": 0, 00:11:31.569 "pending_bdev_io": 0, 00:11:31.569 "completed_nvme_io": 0, 00:11:31.569 "transports": [] 00:11:31.569 } 00:11:31.569 ] 00:11:31.569 }' 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.569 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 [2024-05-15 10:30:47.444156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:31.837 "tick_rate": 1900000000, 00:11:31.837 "poll_groups": [ 00:11:31.837 { 00:11:31.837 "name": "nvmf_tgt_poll_group_000", 00:11:31.837 "admin_qpairs": 0, 00:11:31.837 "io_qpairs": 0, 00:11:31.837 "current_admin_qpairs": 0, 00:11:31.837 "current_io_qpairs": 0, 00:11:31.837 "pending_bdev_io": 0, 00:11:31.837 "completed_nvme_io": 0, 00:11:31.837 "transports": [ 00:11:31.837 { 00:11:31.837 "trtype": "TCP" 00:11:31.837 } 00:11:31.837 ] 00:11:31.837 }, 00:11:31.837 { 00:11:31.837 "name": "nvmf_tgt_poll_group_001", 00:11:31.837 "admin_qpairs": 0, 00:11:31.837 "io_qpairs": 0, 00:11:31.837 "current_admin_qpairs": 0, 00:11:31.837 "current_io_qpairs": 0, 00:11:31.837 "pending_bdev_io": 0, 00:11:31.837 "completed_nvme_io": 0, 00:11:31.837 "transports": [ 00:11:31.837 { 00:11:31.837 "trtype": "TCP" 00:11:31.837 } 00:11:31.837 ] 00:11:31.837 }, 00:11:31.837 { 00:11:31.837 "name": "nvmf_tgt_poll_group_002", 00:11:31.837 "admin_qpairs": 0, 00:11:31.837 "io_qpairs": 0, 00:11:31.837 "current_admin_qpairs": 0, 00:11:31.837 "current_io_qpairs": 0, 00:11:31.837 "pending_bdev_io": 0, 00:11:31.837 "completed_nvme_io": 0, 00:11:31.837 "transports": [ 00:11:31.837 { 00:11:31.837 "trtype": "TCP" 00:11:31.837 } 00:11:31.837 ] 00:11:31.837 }, 00:11:31.837 { 00:11:31.837 "name": "nvmf_tgt_poll_group_003", 00:11:31.837 "admin_qpairs": 0, 00:11:31.837 "io_qpairs": 0, 00:11:31.837 "current_admin_qpairs": 0, 00:11:31.837 "current_io_qpairs": 0, 00:11:31.837 "pending_bdev_io": 0, 00:11:31.837 "completed_nvme_io": 0, 00:11:31.837 "transports": [ 00:11:31.837 { 00:11:31.837 "trtype": "TCP" 00:11:31.837 } 00:11:31.837 ] 00:11:31.837 } 00:11:31.837 ] 
00:11:31.837 }' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 Malloc1 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 [2024-05-15 10:30:47.612387] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:31.837 [2024-05-15 10:30:47.612696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.837 10:30:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:11:31.837 [2024-05-15 10:30:47.641692] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:11:31.837 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:31.837 could not add new controller: failed to write to nvme-fabrics device 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.837 10:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.763 10:30:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
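# The host access-control sequence being exercised here reduces to the RPC flow
# below. This is a minimal sketch, not the test script itself: rpc_cmd in the
# trace is a wrapper around scripts/rpc.py, and HOSTNQN stands in for the
# initiator's uuid-based host NQN shown in the nvme connect lines. It assumes a
# running nvmf_tgt with the TCP transport created earlier in the trace.
rpc=scripts/rpc.py                                   # path relative to the SPDK repo
$rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # expose the bdev as a namespace
$rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1  # require an explicit host list
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With no host registered the connect is expected to fail, which is what the
# NOT wrapper in the trace asserts:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true
# After whitelisting the host NQN the same connect succeeds; the trace then goes
# on to remove the host again and finally re-enables allow_any_host with -e.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"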
00:11:33.763 10:30:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:33.763 10:30:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.763 10:30:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:33.763 10:30:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.719 [2024-05-15 10:30:51.369373] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:11:35.719 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:35.719 could not add new controller: failed to write to nvme-fabrics device 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.719 10:30:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.101 10:30:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.101 10:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:37.101 10:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.101 10:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:37.101 10:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:39.006 10:30:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.265 [2024-05-15 10:30:55.080400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.265 10:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.644 10:30:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.644 10:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:40.644 10:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.644 10:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:40.644 10:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:43.184 
10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.184 [2024-05-15 10:30:58.758614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.184 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.185 10:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.185 10:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.561 10:31:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.561 10:31:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:44.561 10:31:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.561 10:31:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:44.561 10:31:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:46.475 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 [2024-05-15 10:31:02.443153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.735 10:31:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.113 10:31:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.113 10:31:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:48.113 10:31:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.113 10:31:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:48.113 10:31:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:50.019 10:31:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 [2024-05-15 10:31:06.128685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.278 10:31:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.184 10:31:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:11:52.184 10:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:52.184 10:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.184 10:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:52.184 10:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 
[2024-05-15 10:31:09.812467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.092 10:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.524 10:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.524 10:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:55.524 10:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.524 10:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:55.524 10:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:57.433 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 [2024-05-15 10:31:13.493012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 [2024-05-15 10:31:13.541002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.692 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 [2024-05-15 10:31:13.589034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.951 
10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 [2024-05-15 10:31:13.637081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 [2024-05-15 10:31:13.685154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
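# The two five-iteration loops above repeat one pattern: build a subsystem,
# optionally attach the initiator, verify the namespace by its serial number,
# then tear everything back down (the second loop, target/rpc.sh@99 onward,
# runs the same add/remove RPCs without connecting). A condensed sketch of one
# connected iteration, assuming the Malloc1 bdev and TCP transport from earlier
# in the run, with HOSTNQN as a placeholder for the initiator's host NQN:
rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME
$rpc nvmf_create_subsystem "$nqn" -s "$serial"
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5        # pin the namespace to NSID 5
$rpc nvmf_subsystem_allow_any_host "$nqn"
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
# waitforserial, as traced, is roughly: poll lsblk until a block device carrying
# the subsystem serial shows up, with a bounded retry budget.
for ((i = 0; i <= 15; i++)); do
    [ "$(lsblk -l -o NAME,SERIAL | grep -c "$serial")" -ge 1 ] && break
    sleep 2
done
nvme disconnect -n "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"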
00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:57.952 "tick_rate": 1900000000, 00:11:57.952 "poll_groups": [ 00:11:57.952 { 00:11:57.952 "name": "nvmf_tgt_poll_group_000", 00:11:57.952 "admin_qpairs": 0, 00:11:57.952 "io_qpairs": 224, 00:11:57.952 "current_admin_qpairs": 0, 00:11:57.952 "current_io_qpairs": 0, 00:11:57.952 "pending_bdev_io": 0, 00:11:57.952 "completed_nvme_io": 226, 00:11:57.952 "transports": [ 00:11:57.952 { 00:11:57.952 "trtype": "TCP" 00:11:57.952 } 00:11:57.952 ] 00:11:57.952 }, 00:11:57.952 { 00:11:57.952 "name": "nvmf_tgt_poll_group_001", 00:11:57.952 "admin_qpairs": 1, 00:11:57.952 "io_qpairs": 223, 00:11:57.952 "current_admin_qpairs": 0, 00:11:57.952 "current_io_qpairs": 0, 00:11:57.952 "pending_bdev_io": 0, 00:11:57.952 "completed_nvme_io": 226, 00:11:57.952 "transports": [ 00:11:57.952 { 00:11:57.952 "trtype": "TCP" 00:11:57.952 } 00:11:57.952 ] 00:11:57.952 }, 00:11:57.952 { 00:11:57.952 "name": "nvmf_tgt_poll_group_002", 00:11:57.952 "admin_qpairs": 6, 00:11:57.952 "io_qpairs": 218, 00:11:57.952 "current_admin_qpairs": 0, 00:11:57.952 "current_io_qpairs": 0, 00:11:57.952 "pending_bdev_io": 0, 00:11:57.952 "completed_nvme_io": 318, 00:11:57.952 "transports": [ 00:11:57.952 { 00:11:57.952 "trtype": "TCP" 00:11:57.952 } 00:11:57.952 ] 00:11:57.952 }, 00:11:57.952 { 00:11:57.952 "name": "nvmf_tgt_poll_group_003", 00:11:57.952 "admin_qpairs": 0, 00:11:57.952 "io_qpairs": 224, 00:11:57.952 "current_admin_qpairs": 0, 00:11:57.952 "current_io_qpairs": 0, 00:11:57.952 "pending_bdev_io": 0, 00:11:57.952 "completed_nvme_io": 469, 00:11:57.952 "transports": [ 00:11:57.952 { 00:11:57.952 "trtype": "TCP" 00:11:57.952 } 00:11:57.952 ] 00:11:57.952 } 00:11:57.952 ] 00:11:57.952 }' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:57.952 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.953 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:57.953 rmmod nvme_tcp 00:11:58.211 rmmod nvme_fabrics 00:11:58.211 rmmod nvme_keyring 00:11:58.211 
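# The (( 7 > 0 )) and (( 889 > 0 )) checks just above come from two helpers
# visible in the trace: jcount pipes a jq filter through wc -l, jsum through an
# awk accumulator. A rough reconstruction (how $stats is fed to jq is not shown
# in the trace, so the here-string below is an assumption):
jcount() { jq "$1" <<< "$stats" | wc -l; }
jsum()   { jq "$1" <<< "$stats" | awk '{s += $1} END {print s}'; }
# Against the final nvmf_get_stats snapshot this yields 4 poll groups,
# 0+1+6+0 = 7 admin qpairs and 224+223+218+224 = 889 I/O qpairs in total.
jcount '.poll_groups[].name'         # -> 4
jsum   '.poll_groups[].admin_qpairs' # -> 7
jsum   '.poll_groups[].io_qpairs'    # -> 889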
10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2574678 ']' 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2574678 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 2574678 ']' 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 2574678 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2574678 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2574678' 00:11:58.211 killing process with pid 2574678 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 2574678 00:11:58.211 [2024-05-15 10:31:13.913410] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:58.211 10:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 2574678 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.780 10:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.687 10:31:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:00.687 00:12:00.687 real 0m35.757s 00:12:00.687 user 1m51.054s 00:12:00.687 sys 0m5.706s 00:12:00.687 10:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:00.687 10:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.687 ************************************ 00:12:00.687 END TEST nvmf_rpc 00:12:00.687 ************************************ 00:12:00.687 10:31:16 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.687 10:31:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:00.687 10:31:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:00.687 10:31:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.947 ************************************ 00:12:00.947 START TEST nvmf_invalid 00:12:00.947 ************************************ 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.947 * Looking for test storage... 00:12:00.947 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.947 10:31:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:00.948 10:31:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:07.528 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:07.528 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:07.528 Found net devices under 0000:27:00.0: cvl_0_0 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:07.528 Found net devices under 0000:27:00.1: cvl_0_1 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.528 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.529 10:31:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:12:07.529 00:12:07.529 --- 10.0.0.2 ping statistics --- 00:12:07.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.529 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:07.529 00:12:07.529 --- 10.0.0.1 ping statistics --- 00:12:07.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.529 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2584297 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2584297 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 2584297 ']' 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.529 10:31:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.529 [2024-05-15 10:31:23.360714] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:12:07.529 [2024-05-15 10:31:23.360842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.790 [2024-05-15 10:31:23.500314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.790 [2024-05-15 10:31:23.603890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.790 [2024-05-15 10:31:23.603942] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.790 [2024-05-15 10:31:23.603953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.790 [2024-05-15 10:31:23.603963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.790 [2024-05-15 10:31:23.603972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.790 [2024-05-15 10:31:23.604039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.790 [2024-05-15 10:31:23.604153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.790 [2024-05-15 10:31:23.604253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.790 [2024-05-15 10:31:23.604264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.359 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17565 00:12:08.618 [2024-05-15 10:31:24.254569] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:08.618 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:08.618 { 00:12:08.618 "nqn": "nqn.2016-06.io.spdk:cnode17565", 00:12:08.618 "tgt_name": "foobar", 00:12:08.618 "method": "nvmf_create_subsystem", 00:12:08.618 "req_id": 1 00:12:08.618 } 00:12:08.618 Got JSON-RPC error response 00:12:08.618 response: 00:12:08.618 { 00:12:08.618 "code": -32603, 00:12:08.618 "message": "Unable to find target foobar" 00:12:08.618 }' 00:12:08.618 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:08.618 { 00:12:08.618 "nqn": "nqn.2016-06.io.spdk:cnode17565", 00:12:08.618 "tgt_name": "foobar", 00:12:08.618 "method": "nvmf_create_subsystem", 00:12:08.618 "req_id": 1 00:12:08.618 } 00:12:08.618 Got JSON-RPC error response 00:12:08.618 response: 00:12:08.618 { 00:12:08.618 "code": -32603, 00:12:08.619 "message": "Unable to find target foobar" 00:12:08.619 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7159 00:12:08.619 [2024-05-15 10:31:24.422801] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7159: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:08.619 { 00:12:08.619 "nqn": "nqn.2016-06.io.spdk:cnode7159", 00:12:08.619 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:08.619 "method": "nvmf_create_subsystem", 00:12:08.619 "req_id": 1 00:12:08.619 } 00:12:08.619 Got JSON-RPC error response 00:12:08.619 response: 00:12:08.619 { 00:12:08.619 "code": -32602, 00:12:08.619 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:08.619 }' 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:08.619 { 00:12:08.619 "nqn": "nqn.2016-06.io.spdk:cnode7159", 00:12:08.619 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:08.619 "method": "nvmf_create_subsystem", 00:12:08.619 "req_id": 1 00:12:08.619 } 00:12:08.619 Got JSON-RPC error response 00:12:08.619 response: 00:12:08.619 { 00:12:08.619 "code": -32602, 00:12:08.619 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:08.619 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:08.619 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14156 00:12:08.878 [2024-05-15 10:31:24.586950] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14156: invalid model number 'SPDK_Controller' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:08.878 { 00:12:08.878 "nqn": "nqn.2016-06.io.spdk:cnode14156", 00:12:08.878 "model_number": "SPDK_Controller\u001f", 00:12:08.878 "method": "nvmf_create_subsystem", 00:12:08.878 "req_id": 1 00:12:08.878 } 00:12:08.878 Got JSON-RPC error response 00:12:08.878 response: 00:12:08.878 { 00:12:08.878 "code": -32602, 00:12:08.878 "message": "Invalid MN SPDK_Controller\u001f" 00:12:08.878 }' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:08.878 { 00:12:08.878 "nqn": "nqn.2016-06.io.spdk:cnode14156", 00:12:08.878 "model_number": "SPDK_Controller\u001f", 00:12:08.878 "method": "nvmf_create_subsystem", 00:12:08.878 "req_id": 1 00:12:08.878 } 00:12:08.878 Got JSON-RPC error response 00:12:08.878 response: 00:12:08.878 { 00:12:08.878 "code": -32602, 00:12:08.878 "message": "Invalid MN SPDK_Controller\u001f" 00:12:08.878 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' 
'94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 47 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ':=uBGrASGa0Vdo/?&Vd"' 00:12:08.878 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ':=uBGrASGa0Vdo/?&Vd"' nqn.2016-06.io.spdk:cnode11819 00:12:09.137 [2024-05-15 10:31:24.879329] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11819: invalid serial number ':=uBGrASGa0Vdo/?&Vd"' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:09.137 { 00:12:09.137 "nqn": "nqn.2016-06.io.spdk:cnode11819", 00:12:09.137 "serial_number": "\u007f:=uBGrASGa0Vdo/?&Vd\"", 00:12:09.137 "method": "nvmf_create_subsystem", 00:12:09.137 "req_id": 1 00:12:09.137 } 00:12:09.137 Got JSON-RPC error response 00:12:09.137 response: 00:12:09.137 { 00:12:09.137 "code": -32602, 
00:12:09.137 "message": "Invalid SN \u007f:=uBGrASGa0Vdo/?&Vd\"" 00:12:09.137 }' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:09.137 { 00:12:09.137 "nqn": "nqn.2016-06.io.spdk:cnode11819", 00:12:09.137 "serial_number": "\u007f:=uBGrASGa0Vdo/?&Vd\"", 00:12:09.137 "method": "nvmf_create_subsystem", 00:12:09.137 "req_id": 1 00:12:09.137 } 00:12:09.137 Got JSON-RPC error response 00:12:09.137 response: 00:12:09.137 { 00:12:09.137 "code": -32602, 00:12:09.137 "message": "Invalid SN \u007f:=uBGrASGa0Vdo/?&Vd\"" 00:12:09.137 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:09.137 
10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:09.137 
10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.137 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:09.397 10:31:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.397 10:31:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:09.397 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fSPzWA {ThI'\''l-mkHQt7[<$!m41`|0yJU1M(yVBsy' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'fSPzWA {ThI'\''l-mkHQt7[<$!m41`|0yJU1M(yVBsy' nqn.2016-06.io.spdk:cnode30834 00:12:09.398 [2024-05-15 10:31:25.243719] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30834: invalid model number 'fSPzWA {ThI'l-mkHQt7[<$!m41`|0yJU1M(yVBsy' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:09.398 { 00:12:09.398 "nqn": "nqn.2016-06.io.spdk:cnode30834", 00:12:09.398 "model_number": "fSPzWA {ThI'\''l-mkHQt7[<$!m41`|0yJU1M(yVBsy", 00:12:09.398 "method": "nvmf_create_subsystem", 00:12:09.398 "req_id": 1 00:12:09.398 } 00:12:09.398 Got JSON-RPC error response 00:12:09.398 response: 00:12:09.398 { 00:12:09.398 "code": -32602, 00:12:09.398 "message": "Invalid MN fSPzWA {ThI'\''l-mkHQt7[<$!m41`|0yJU1M(yVBsy" 00:12:09.398 }' 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:09.398 { 00:12:09.398 "nqn": "nqn.2016-06.io.spdk:cnode30834", 00:12:09.398 "model_number": "fSPzWA {ThI'l-mkHQt7[<$!m41`|0yJU1M(yVBsy", 00:12:09.398 "method": "nvmf_create_subsystem", 00:12:09.398 "req_id": 1 00:12:09.398 } 00:12:09.398 Got JSON-RPC error response 00:12:09.398 response: 00:12:09.398 { 00:12:09.398 "code": -32602, 00:12:09.398 "message": "Invalid MN fSPzWA {ThI'l-mkHQt7[<$!m41`|0yJU1M(yVBsy" 00:12:09.398 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:09.398 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:09.663 [2024-05-15 10:31:25.383963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.663 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:09.922 [2024-05-15 10:31:25.684297] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:09.922 [2024-05-15 10:31:25.684393] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:09.922 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:09.922 { 00:12:09.922 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.922 "listen_address": { 00:12:09.922 "trtype": "tcp", 00:12:09.922 "traddr": "", 00:12:09.922 "trsvcid": "4421" 00:12:09.922 }, 00:12:09.923 "method": "nvmf_subsystem_remove_listener", 00:12:09.923 "req_id": 1 00:12:09.923 } 00:12:09.923 Got JSON-RPC error response 00:12:09.923 response: 00:12:09.923 { 00:12:09.923 "code": -32602, 00:12:09.923 "message": "Invalid parameters" 00:12:09.923 }' 00:12:09.923 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:09.923 { 00:12:09.923 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.923 "listen_address": { 00:12:09.923 "trtype": "tcp", 00:12:09.923 "traddr": "", 00:12:09.923 "trsvcid": "4421" 00:12:09.923 }, 00:12:09.923 "method": "nvmf_subsystem_remove_listener", 00:12:09.923 "req_id": 1 00:12:09.923 } 00:12:09.923 Got JSON-RPC error response 00:12:09.923 response: 00:12:09.923 { 00:12:09.923 "code": -32602, 00:12:09.923 "message": "Invalid parameters" 00:12:09.923 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:09.923 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11434 -i 0 00:12:10.182 [2024-05-15 10:31:25.844535] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11434: invalid cntlid range [0-65519] 00:12:10.182 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode11434", 00:12:10.182 "min_cntlid": 0, 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 "req_id": 1 00:12:10.182 } 00:12:10.182 Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid cntlid range [0-65519]" 00:12:10.182 }' 00:12:10.182 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode11434", 00:12:10.182 "min_cntlid": 0, 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 "req_id": 1 00:12:10.182 } 00:12:10.182 
Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid cntlid range [0-65519]" 00:12:10.182 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.182 10:31:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15475 -i 65520 00:12:10.182 [2024-05-15 10:31:26.004701] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15475: invalid cntlid range [65520-65519] 00:12:10.182 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode15475", 00:12:10.182 "min_cntlid": 65520, 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 "req_id": 1 00:12:10.182 } 00:12:10.182 Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid cntlid range [65520-65519]" 00:12:10.182 }' 00:12:10.182 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode15475", 00:12:10.182 "min_cntlid": 65520, 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 "req_id": 1 00:12:10.182 } 00:12:10.182 Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid cntlid range [65520-65519]" 00:12:10.182 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.183 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3464 -I 0 00:12:10.443 [2024-05-15 10:31:26.168907] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3464: invalid cntlid range [1-0] 00:12:10.443 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:10.443 { 00:12:10.443 "nqn": "nqn.2016-06.io.spdk:cnode3464", 00:12:10.443 "max_cntlid": 0, 00:12:10.443 "method": "nvmf_create_subsystem", 00:12:10.443 "req_id": 1 00:12:10.443 } 00:12:10.443 Got JSON-RPC error response 00:12:10.443 response: 00:12:10.443 { 00:12:10.443 "code": -32602, 00:12:10.443 "message": "Invalid cntlid range [1-0]" 00:12:10.443 }' 00:12:10.443 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:10.443 { 00:12:10.443 "nqn": "nqn.2016-06.io.spdk:cnode3464", 00:12:10.443 "max_cntlid": 0, 00:12:10.443 "method": "nvmf_create_subsystem", 00:12:10.443 "req_id": 1 00:12:10.443 } 00:12:10.443 Got JSON-RPC error response 00:12:10.443 response: 00:12:10.443 { 00:12:10.443 "code": -32602, 00:12:10.443 "message": "Invalid cntlid range [1-0]" 00:12:10.443 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.443 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11234 -I 65520 00:12:10.703 [2024-05-15 10:31:26.333124] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11234: invalid cntlid range [1-65520] 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:10.703 { 00:12:10.703 "nqn": "nqn.2016-06.io.spdk:cnode11234", 00:12:10.703 "max_cntlid": 65520, 00:12:10.703 "method": "nvmf_create_subsystem", 00:12:10.703 "req_id": 1 00:12:10.703 } 00:12:10.703 Got JSON-RPC error response 00:12:10.703 
response: 00:12:10.703 { 00:12:10.703 "code": -32602, 00:12:10.703 "message": "Invalid cntlid range [1-65520]" 00:12:10.703 }' 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:10.703 { 00:12:10.703 "nqn": "nqn.2016-06.io.spdk:cnode11234", 00:12:10.703 "max_cntlid": 65520, 00:12:10.703 "method": "nvmf_create_subsystem", 00:12:10.703 "req_id": 1 00:12:10.703 } 00:12:10.703 Got JSON-RPC error response 00:12:10.703 response: 00:12:10.703 { 00:12:10.703 "code": -32602, 00:12:10.703 "message": "Invalid cntlid range [1-65520]" 00:12:10.703 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10828 -i 6 -I 5 00:12:10.703 [2024-05-15 10:31:26.493353] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10828: invalid cntlid range [6-5] 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:10.703 { 00:12:10.703 "nqn": "nqn.2016-06.io.spdk:cnode10828", 00:12:10.703 "min_cntlid": 6, 00:12:10.703 "max_cntlid": 5, 00:12:10.703 "method": "nvmf_create_subsystem", 00:12:10.703 "req_id": 1 00:12:10.703 } 00:12:10.703 Got JSON-RPC error response 00:12:10.703 response: 00:12:10.703 { 00:12:10.703 "code": -32602, 00:12:10.703 "message": "Invalid cntlid range [6-5]" 00:12:10.703 }' 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:10.703 { 00:12:10.703 "nqn": "nqn.2016-06.io.spdk:cnode10828", 00:12:10.703 "min_cntlid": 6, 00:12:10.703 "max_cntlid": 5, 00:12:10.703 "method": "nvmf_create_subsystem", 00:12:10.703 "req_id": 1 00:12:10.703 } 00:12:10.703 Got JSON-RPC error response 00:12:10.703 response: 00:12:10.703 { 00:12:10.703 "code": -32602, 00:12:10.703 "message": "Invalid cntlid range [6-5]" 00:12:10.703 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.703 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:10.962 { 00:12:10.962 "name": "foobar", 00:12:10.962 "method": "nvmf_delete_target", 00:12:10.962 "req_id": 1 00:12:10.962 } 00:12:10.962 Got JSON-RPC error response 00:12:10.962 response: 00:12:10.962 { 00:12:10.962 "code": -32602, 00:12:10.962 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:10.962 }' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:10.962 { 00:12:10.962 "name": "foobar", 00:12:10.962 "method": "nvmf_delete_target", 00:12:10.962 "req_id": 1 00:12:10.962 } 00:12:10.962 Got JSON-RPC error response 00:12:10.962 response: 00:12:10.962 { 00:12:10.962 "code": -32602, 00:12:10.962 "message": "The specified target doesn't exist, cannot delete it." 
00:12:10.962 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.962 rmmod nvme_tcp 00:12:10.962 rmmod nvme_fabrics 00:12:10.962 rmmod nvme_keyring 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2584297 ']' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2584297 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 2584297 ']' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 2584297 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2584297 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2584297' 00:12:10.962 killing process with pid 2584297 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 2584297 00:12:10.962 [2024-05-15 10:31:26.704933] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:10.962 10:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 2584297 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.531 10:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.438 10:31:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
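The nvmf_invalid checks above all follow one pattern: issue an RPC with a deliberately bad parameter, capture the JSON-RPC error text, and assert that it contains the expected message. A minimal standalone sketch of that pattern, assuming scripts/rpc.py is run from an SPDK checkout against a target already listening on the default RPC socket, and reusing the same cnode11434 / min_cntlid 0 case exercised in the trace above:

# attempt to create a subsystem with an out-of-range min_cntlid; the RPC is expected to fail
out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11434 -i 0 2>&1) || true
# the target should reject the request with an "Invalid cntlid range" JSON-RPC error
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "got the expected error"
else
    echo "unexpected response: $out"
fi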
00:12:13.438 00:12:13.438 real 0m12.638s 00:12:13.438 user 0m17.511s 00:12:13.438 sys 0m5.878s 00:12:13.438 10:31:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:13.438 10:31:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:13.438 ************************************ 00:12:13.438 END TEST nvmf_invalid 00:12:13.438 ************************************ 00:12:13.438 10:31:29 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:13.438 10:31:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:13.438 10:31:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:13.438 10:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.438 ************************************ 00:12:13.438 START TEST nvmf_abort 00:12:13.438 ************************************ 00:12:13.438 10:31:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:13.698 * Looking for test storage... 00:12:13.698 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.698 10:31:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.008 
10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:19.008 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.008 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:19.008 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:19.009 Found net devices under 0000:27:00.0: cvl_0_0 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:19.009 Found net devices under 0000:27:00.1: cvl_0_1 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:19.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:19.009 00:12:19.009 --- 10.0.0.2 ping statistics --- 00:12:19.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.009 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:12:19.009 00:12:19.009 --- 10.0.0.1 ping statistics --- 00:12:19.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.009 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2588951 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2588951 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 2588951 ']' 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.009 10:31:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:19.009 [2024-05-15 10:31:34.695695] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:12:19.009 [2024-05-15 10:31:34.695803] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.009 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.009 [2024-05-15 10:31:34.820685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.267 [2024-05-15 10:31:34.920018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.267 [2024-05-15 10:31:34.920067] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.267 [2024-05-15 10:31:34.920078] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.267 [2024-05-15 10:31:34.920088] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.267 [2024-05-15 10:31:34.920097] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.267 [2024-05-15 10:31:34.920246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.267 [2024-05-15 10:31:34.920275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.267 [2024-05-15 10:31:34.920286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.525 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:19.525 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:12:19.525 10:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.525 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:19.525 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.784 [2024-05-15 10:31:35.419314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.784 Malloc0 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.784 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.784 Delay0 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:19.785 10:31:35 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.785 [2024-05-15 10:31:35.511712] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:19.785 [2024-05-15 10:31:35.511980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.785 10:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:19.785 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.046 [2024-05-15 10:31:35.690403] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:22.582 Initializing NVMe Controllers 00:12:22.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:22.582 controller IO queue size 128 less than required 00:12:22.582 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:22.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:22.582 Initialization complete. Launching workers. 
00:12:22.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 47755 00:12:22.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47820, failed to submit 62 00:12:22.582 success 47759, unsuccess 61, failed 0 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.582 rmmod nvme_tcp 00:12:22.582 rmmod nvme_fabrics 00:12:22.582 rmmod nvme_keyring 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2588951 ']' 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2588951 00:12:22.582 10:31:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 2588951 ']' 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 2588951 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2588951 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2588951' 00:12:22.582 killing process with pid 2588951 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 2588951 00:12:22.582 [2024-05-15 10:31:38.050689] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:22.582 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 2588951 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.842 
10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.842 10:31:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.750 10:31:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.750 00:12:24.750 real 0m11.334s 00:12:24.750 user 0m14.211s 00:12:24.750 sys 0m4.545s 00:12:25.011 10:31:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:25.011 10:31:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:25.011 ************************************ 00:12:25.011 END TEST nvmf_abort 00:12:25.011 ************************************ 00:12:25.011 10:31:40 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:25.011 10:31:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:25.011 10:31:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:25.011 10:31:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.011 ************************************ 00:12:25.011 START TEST nvmf_ns_hotplug_stress 00:12:25.011 ************************************ 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:25.011 * Looking for test storage... 00:12:25.011 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.011 10:31:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.011 
10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.011 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.286 10:31:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:30.286 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:30.286 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.286 10:31:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:30.286 Found net devices under 0000:27:00.0: cvl_0_0 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:30.286 Found net devices under 0000:27:00.1: cvl_0_1 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.286 
10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.286 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.287 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.287 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.287 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.287 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.287 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.287 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:12:30.545 00:12:30.545 --- 10.0.0.2 ping statistics --- 00:12:30.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.545 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:12:30.545 00:12:30.545 --- 10.0.0.1 ping statistics --- 00:12:30.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.545 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2593722 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2593722 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 2593722 ']' 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.545 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:30.545 [2024-05-15 10:31:46.279858] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:12:30.545 [2024-05-15 10:31:46.279957] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.545 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.545 [2024-05-15 10:31:46.397977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.805 [2024-05-15 10:31:46.497185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
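The nvmf_tcp_init sequence traced above reduces to a short list of commands. The following is a condensed sketch, not the common.sh source itself; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are taken from this run, and long workspace paths are abbreviated.

# Target-side port is moved into its own network namespace; the initiator side stays on the host.
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # NVMF_TARGET_INTERFACE
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP (NVMF_INITIATOR_IP)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host reachability check
# nvmfappstart then launches the target inside the namespace and waits on its RPC socket:
ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &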
00:12:30.805 [2024-05-15 10:31:46.497221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.805 [2024-05-15 10:31:46.497230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.805 [2024-05-15 10:31:46.497241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.805 [2024-05-15 10:31:46.497249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.805 [2024-05-15 10:31:46.497393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.805 [2024-05-15 10:31:46.497425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.805 [2024-05-15 10:31:46.497434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.374 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:31.374 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:12:31.374 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.374 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:31.374 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.374 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.374 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:31.374 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.374 [2024-05-15 10:31:47.167966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.374 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.631 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.889 [2024-05-15 10:31:47.508808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:31.889 [2024-05-15 10:31:47.509161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.890 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.890 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:32.148 Malloc0 00:12:32.148 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:32.148 Delay0 00:12:32.148 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.409 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:32.409 NULL1 00:12:32.670 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:32.670 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2594056 00:12:32.670 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:32.670 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:32.670 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.670 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.928 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.928 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:32.928 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:33.186 [2024-05-15 10:31:48.926147] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:12:33.186 true 00:12:33.186 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:33.186 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.445 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.445 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:33.445 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:33.705 true 00:12:33.705 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:33.705 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.705 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.966 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:33.966 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:34.227 true 00:12:34.227 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:34.227 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.227 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.487 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:34.487 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:34.487 true 00:12:34.744 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:34.744 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.744 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.002 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:35.002 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:35.002 true 00:12:35.002 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:35.002 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.260 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.523 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:35.523 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:35.523 true 00:12:35.523 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:35.523 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.827 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.827 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:35.827 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:36.085 true 00:12:36.085 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 
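For reference, the subsystem and bdev configuration traced just before the resize loop began (ns_hotplug_stress.sh lines 27-42) boils down to the RPC sequence below. This is a condensed restatement of the commands visible in the trace, with the workspace path shortened to rpc.py; it is not the script verbatim.

# Target configuration distilled from the ns_hotplug_stress.sh trace above:
rpc.py nvmf_create_transport -t tcp -o -u 8192                       # create the TCP transport (flags as traced)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0                          # 32 MB malloc bdev, 512-byte blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev layered on Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512                               # null bdev NULL1, size 1000, 512-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                        # background random-read load; its PID (2594056 here) is PERF_PID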
00:12:36.085 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.085 10:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.343 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:36.343 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:36.601 true 00:12:36.601 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:36.601 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.601 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.861 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:36.861 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:36.861 true 00:12:36.861 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:36.861 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.122 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.382 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:37.382 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:37.382 true 00:12:37.382 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:37.382 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.641 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.641 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:37.641 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:37.899 true 00:12:37.899 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:37.899 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.157 
10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.157 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:38.157 10:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:38.416 true 00:12:38.416 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:38.416 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.416 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.677 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:38.677 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:38.677 true 00:12:38.677 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:38.677 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.937 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.198 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:39.198 10:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:39.198 true 00:12:39.198 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:39.198 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.455 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.714 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:39.714 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:39.714 true 00:12:39.714 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:39.714 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.972 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.972 10:31:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:39.972 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:40.232 true 00:12:40.232 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:40.232 10:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.232 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.493 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:40.493 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:40.752 true 00:12:40.752 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:40.753 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.753 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.011 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:41.011 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:41.269 true 00:12:41.269 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:41.269 10:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.269 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.528 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:41.528 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:41.528 true 00:12:41.528 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:41.528 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.789 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.789 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:41.789 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:42.050 true 00:12:42.050 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:42.050 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.308 10:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.308 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:42.308 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:42.566 true 00:12:42.566 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:42.566 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.824 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.824 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:42.824 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:43.082 true 00:12:43.082 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:43.082 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.082 10:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.343 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:43.343 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:43.343 true 00:12:43.343 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:43.343 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.603 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.860 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:43.860 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:43.860 true 00:12:43.860 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 
00:12:43.860 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.117 10:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.374 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:44.374 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:44.374 true 00:12:44.374 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:44.374 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.632 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.632 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:44.632 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:44.914 true 00:12:44.914 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:44.914 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.914 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.172 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:45.172 10:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:45.172 true 00:12:45.429 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:45.429 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.429 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.686 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:45.686 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:45.686 true 00:12:45.686 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:45.686 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.943 
10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.202 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:46.203 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:46.203 true 00:12:46.203 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:46.203 10:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.463 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.463 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:12:46.463 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:46.723 true 00:12:46.723 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:46.723 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.723 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.982 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:12:46.982 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:12:47.244 true 00:12:47.244 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:47.244 10:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.244 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.504 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:12:47.504 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:12:47.762 true 00:12:47.762 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:47.762 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.762 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.021 10:32:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:12:48.021 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:12:48.021 true 00:12:48.021 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:48.021 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.282 10:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.542 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:12:48.542 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:12:48.542 true 00:12:48.542 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:48.542 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.803 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.803 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:12:48.803 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:12:49.063 true 00:12:49.063 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:49.063 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.390 10:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.390 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:12:49.391 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:12:49.651 true 00:12:49.651 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:49.651 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.652 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.911 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:12:49.911 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:12:49.911 true 00:12:49.911 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:49.911 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.170 10:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.430 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:12:50.430 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:12:50.430 true 00:12:50.430 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:50.430 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.691 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.951 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:12:50.951 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:12:50.951 true 00:12:50.951 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:50.951 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.211 10:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.211 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:12:51.211 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:12:51.469 true 00:12:51.469 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:51.469 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.727 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.727 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:12:51.727 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:12:51.986 true 00:12:51.986 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 
00:12:51.986 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.986 10:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.248 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:12:52.248 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:12:52.506 true 00:12:52.506 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:52.506 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.506 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.767 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:12:52.767 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:12:52.767 true 00:12:52.767 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:52.767 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.025 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.025 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:12:53.025 10:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:12:53.283 true 00:12:53.283 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:53.283 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.542 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.542 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:12:53.542 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:12:53.800 true 00:12:53.800 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:53.800 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.800 
10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.059 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:12:54.059 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:12:54.059 true 00:12:54.318 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:54.318 10:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.318 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.576 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:12:54.576 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:12:54.576 true 00:12:54.576 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:54.576 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.833 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.833 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:12:54.833 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:12:55.090 true 00:12:55.090 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:55.090 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.090 10:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.348 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:12:55.348 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:12:55.348 true 00:12:55.606 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:55.606 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.606 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.895 10:32:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:12:55.895 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:12:55.895 true 00:12:55.895 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:55.895 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.153 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.153 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:12:56.153 10:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:12:56.411 true 00:12:56.411 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:56.411 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.411 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.669 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:12:56.669 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:12:56.669 true 00:12:56.669 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:56.669 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.927 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.927 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:12:56.927 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:12:57.186 true 00:12:57.186 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:57.186 10:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.446 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.446 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:12:57.446 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:12:57.705 true 00:12:57.705 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:57.705 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.705 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.963 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:12:57.963 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:12:57.963 true 00:12:57.963 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:57.963 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.221 10:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.221 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:12:58.221 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:12:58.480 true 00:12:58.480 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:58.480 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.738 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.738 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:12:58.738 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:12:58.996 true 00:12:58.996 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:58.996 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.996 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.256 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:12:59.256 10:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:12:59.256 true 00:12:59.256 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 
00:12:59.256 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.514 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.514 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:12:59.514 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:12:59.772 true 00:12:59.772 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:12:59.772 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.030 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.030 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:13:00.030 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:13:00.288 true 00:13:00.288 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:00.288 10:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.288 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.545 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:13:00.545 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:13:00.545 true 00:13:00.545 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:00.545 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.803 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.061 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:13:01.061 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:13:01.061 true 00:13:01.061 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:01.061 10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.318 
10:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.319 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:13:01.319 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:13:01.576 true 00:13:01.576 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:01.576 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.576 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.834 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1064 00:13:01.834 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:13:01.834 true 00:13:01.834 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:01.834 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.091 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.091 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1065 00:13:02.091 10:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:13:02.350 true 00:13:02.350 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:02.350 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.646 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.646 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1066 00:13:02.646 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:13:02.907 true 00:13:02.907 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056 00:13:02.907 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.907 Initializing NVMe Controllers 00:13:02.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:02.907 Controller IO queue size 128, less than required. 
00:13:02.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:02.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:02.907 Initialization complete. Launching workers.
00:13:02.907 ========================================================
00:13:02.908 Latency(us)
00:13:02.908 Device Information : IOPS MiB/s Average min max
00:13:02.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28552.94 13.94 4483.02 3054.80 9480.74
00:13:02.908 ========================================================
00:13:02.908 Total : 28552.94 13.94 4483.02 3054.80 9480.74
00:13:02.908
00:13:02.908 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:03.166 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1067
00:13:03.166 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1067
00:13:03.166 true
00:13:03.166 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2594056
00:13:03.166 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2594056) - No such process
00:13:03.166 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2594056
00:13:03.166 10:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.424 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:03.682 null0
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:03.682 null1
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.682 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:03.939 null2
00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
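Once the I/O generator exits (the "No such process" from the kill -0 check above), the script waits for it, tears down namespaces 1 and 2, and starts the multi-worker phase: nthreads=8 and one null bdev per worker, null0 through null7 (the creation of null3 through null7 continues just below). A sketch of the creation loop traced at sh@59-sh@60, reconstructed from the trace with rpc_py as the same assumed variable as before:

    nthreads=8                                        # sh@58
    pids=()                                           # sh@58
    for ((i = 0; i < nthreads; i++)); do              # sh@59
        # null0..null7, each created with the 100 / 4096 arguments seen in the trace
        $rpc_py bdev_null_create "null$i" 100 4096    # sh@60
    done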
00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:03.939 null3 00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:03.939 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:04.197 null4 00:13:04.197 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.197 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.197 10:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:04.197 null5 00:13:04.197 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.197 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.197 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:04.456 null6 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:04.456 null7 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:04.456 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
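Each null bdev then gets its own add_remove worker launched in the background (sh@62-sh@64), and the script later blocks on all of them at sh@66; the eight PIDs listed in that wait a little further down are the workers being spawned here. A reconstructed sketch of the spawn loop, with nthreads and pids carried over from the previous sketch and add_remove sketched further down:

    for ((i = 0; i < nthreads; i++)); do   # sh@62
        add_remove $((i + 1)) "null$i" &   # sh@63: nsid 1..8 paired with null0..null7
        pids+=($!)                         # sh@64: remember the worker's PID
    done
    wait "${pids[@]}"                      # sh@66: block until all eight workers finish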
00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.716 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2600451 2600452 2600454 2600455 2600456 2600458 2600460 2600462 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
7 00:13:04.717 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.976 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.234 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.234 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.234 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.234 10:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.234 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.235 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.493 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.751 
10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.751 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
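The interleaved sh@16/sh@17/sh@18 entries through the rest of this stretch are those eight workers running concurrently; each one loops ten times adding and then removing its own namespace. Reconstructed from the traced lines sh@14-sh@18 (rpc_py as in the earlier sketch, not the verbatim script), the worker body is roughly:

    add_remove() {
        local nsid=$1 bdev=$2              # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do     # sh@16
            # attach the bdev as namespace $nsid, then immediately detach it again
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

Because all eight loops hit nqn.2016-06.io.spdk:cnode1 at the same time, their trace lines interleave, which is why the namespace numbers above and below appear out of order.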
00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.752 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.010 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.267 10:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.267 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.268 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.525 10:32:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.525 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.526 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.785 
10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.785 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.043 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.300 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.300 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.300 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.300 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.301 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.558 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.815 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.072 rmmod nvme_tcp 00:13:08.072 rmmod nvme_fabrics 00:13:08.072 rmmod nvme_keyring 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2593722 ']' 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2593722 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 2593722 ']' 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 2593722 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2593722 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2593722' 00:13:08.072 killing process with pid 2593722 00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@966 -- # kill 2593722
00:13:08.072 [2024-05-15 10:32:23.863320] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:13:08.072 10:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 2593722
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:08.639 10:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:10.542 10:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:10.542
00:13:10.542 real 0m45.718s
00:13:10.542 user 3m11.198s
00:13:10.542 sys 0m15.131s
00:13:10.542 10:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable
00:13:10.542 10:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:10.542 ************************************
00:13:10.542 END TEST nvmf_ns_hotplug_stress
00:13:10.542 ************************************
00:13:10.801 10:32:26 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:10.801 10:32:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']'
00:13:10.801 10:32:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable
00:13:10.801 10:32:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:10.801 ************************************
00:13:10.801 START TEST nvmf_connect_stress
00:13:10.801 ************************************
00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:10.801 * Looking for test storage...
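(The START/END banners and the real/user/sys summary above are printed by the run_test wrapper, whose body traces to common/autotest_common.sh in this log: it brackets each sub-test with markers and times it. A hedged sketch of just that visible behaviour follows; the banner width and the wrapper's internal xtrace and bookkeeping details are assumptions.)

# --- sketch of run_test's visible behaviour (not part of the log) ---
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # e.g. run_test nvmf_connect_stress .../target/connect_stress.sh --transport=tcp
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
# --- end sketch ---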
00:13:10.801 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:10.801 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.802 10:32:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.802 10:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.362 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:17.363 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:17.363 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:17.363 Found net devices under 0000:27:00.0: cvl_0_0 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.363 
10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:17.363 Found net devices under 0000:27:00.1: cvl_0_1 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:13:17.363 00:13:17.363 --- 10.0.0.2 ping statistics --- 00:13:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.363 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:13:17.363 00:13:17.363 --- 10.0.0.1 ping statistics --- 00:13:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.363 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2605390 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2605390 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 2605390 ']' 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:17.363 10:32:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.363 [2024-05-15 10:32:32.751841] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
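(The nvmf_tcp_init sequence traced above, nvmf/common.sh@242 through @268, builds the two-port test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, TCP port 4420 is opened on the initiator side, and connectivity is verified with a ping in each direction. Condensed into plain commands below; this is a reconstruction from the log, not the SPDK helper itself.)

# --- sketch condensed from the nvmf_tcp_init trace above (not part of the log) ---
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"              # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator NIC stays in the root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept inbound TCP/4420 (NVMe/TCP) on the initiator port
ping -c 1 10.0.0.2                                              # initiator -> target (0.400 ms above)
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1       # target -> initiator (0.344 ms above)
# --- end sketch ---

The nvmf_tgt process is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, traced just above), so it listens on 10.0.0.2 while the host-side initiator reaches it over cvl_0_1.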
00:13:17.363 [2024-05-15 10:32:32.751948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.363 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.363 [2024-05-15 10:32:32.872534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.363 [2024-05-15 10:32:32.974061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.363 [2024-05-15 10:32:32.974098] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.363 [2024-05-15 10:32:32.974109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.363 [2024-05-15 10:32:32.974119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.363 [2024-05-15 10:32:32.974127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.363 [2024-05-15 10:32:32.974186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.363 [2024-05-15 10:32:32.974297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.363 [2024-05-15 10:32:32.974309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.623 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.623 [2024-05-15 10:32:33.495998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.883 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 [2024-05-15 10:32:33.534498] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:17.884 [2024-05-15 10:32:33.534820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 NULL1 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2605697 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.884 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.144 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:18.144 10:32:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.144 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.144 10:32:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.712 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.712 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:18.712 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.712 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.712 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.969 10:32:34 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.969 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:18.969 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.969 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.969 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.228 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.228 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:19.228 10:32:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.228 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.228 10:32:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.488 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.488 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:19.488 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.488 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.488 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.748 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.748 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:19.748 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.748 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.748 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.315 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.315 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:20.315 10:32:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.315 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.315 10:32:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.572 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.572 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:20.572 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.572 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.572 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.834 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.834 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:20.834 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.834 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.834 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:13:21.129 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:21.129 10:32:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.129 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.129 10:32:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.389 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.389 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:21.389 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.389 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.389 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.648 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.648 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:21.648 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.648 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.648 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.214 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.214 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:22.214 10:32:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.214 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.214 10:32:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.474 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.474 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:22.474 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.474 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.474 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.734 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.734 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:22.734 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.734 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.734 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.993 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.993 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:22.993 10:32:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.993 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.994 10:32:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.252 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.252 10:32:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:23.252 10:32:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.252 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.252 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.819 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.819 10:32:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:23.819 10:32:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.819 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.819 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.079 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.079 10:32:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:24.079 10:32:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.079 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.079 10:32:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.339 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.339 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:24.339 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.339 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.339 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.599 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.599 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:24.599 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.599 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.599 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.857 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.857 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:24.857 10:32:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.857 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.857 10:32:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.424 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.424 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:25.424 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.424 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.424 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.684 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.684 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2605697 00:13:25.684 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.684 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.684 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.943 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.943 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:25.943 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.943 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.943 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.201 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.201 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:26.201 10:32:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.201 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.201 10:32:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.458 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.458 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:26.458 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.458 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.458 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.024 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:27.024 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:27.024 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.024 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:27.024 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.283 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:27.283 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:27.283 10:32:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.283 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:27.283 10:32:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.542 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:27.542 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:27.542 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.542 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:27.542 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.801 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:27.801 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:27.801 10:32:43 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.801 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:27.801 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.801 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605697 00:13:28.059 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2605697) - No such process 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2605697 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.059 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.059 rmmod nvme_tcp 00:13:28.317 rmmod nvme_fabrics 00:13:28.317 rmmod nvme_keyring 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2605390 ']' 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2605390 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 2605390 ']' 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 2605390 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:28.317 10:32:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2605390 00:13:28.317 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:28.317 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:28.317 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2605390' 00:13:28.317 killing process with pid 2605390 00:13:28.317 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 2605390 00:13:28.317 [2024-05-15 10:32:44.010972] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:13:28.317 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 2605390 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.887 10:32:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.796 10:32:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.796 00:13:30.796 real 0m20.058s 00:13:30.796 user 0m44.033s 00:13:30.796 sys 0m6.203s 00:13:30.796 10:32:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:30.796 10:32:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.796 ************************************ 00:13:30.796 END TEST nvmf_connect_stress 00:13:30.796 ************************************ 00:13:30.796 10:32:46 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.796 10:32:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:30.796 10:32:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:30.796 10:32:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.796 ************************************ 00:13:30.796 START TEST nvmf_fused_ordering 00:13:30.796 ************************************ 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.796 * Looking for test storage... 
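The loop traced above is the connect_stress harness waiting out its background I/O process: each pass checks that the PID is still alive with kill -0 and issues an RPC at the target, and once kill -0 fails ("No such process") the script reaps the PID, removes its scratch rpc.txt, clears the exit trap and runs nvmftestfini. A minimal sketch of that pattern, with hypothetical variable names standing in for the script's own:

  # Sketch of the polling/teardown pattern seen in the trace above (hypothetical
  # names; rpc_cmd is the harness's own wrapper around the SPDK RPC socket).
  while kill -0 "$stress_pid" 2>/dev/null; do
      rpc_cmd                      # keep the target's RPC server busy while I/O runs
  done
  wait "$stress_pid" || true       # reap the finished stress process
  rm -f "$rpc_scratch_file"        # e.g. the rpc.txt removed in the trace
  trap - SIGINT SIGTERM EXIT       # drop the cleanup trap before normal teardown
  nvmftestfini                     # unloads nvme-tcp/nvme-fabrics and kills the target app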
00:13:30.796 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:30.796 10:32:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.797 10:32:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:37.369 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:37.369 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:37.369 Found net devices under 0000:27:00.0: cvl_0_0 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.369 
10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:37.369 Found net devices under 0000:27:00.1: cvl_0_1 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:13:37.369 00:13:37.369 --- 10.0.0.2 ping statistics --- 00:13:37.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.369 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:13:37.369 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:13:37.369 00:13:37.369 --- 10.0.0.1 ping statistics --- 00:13:37.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.370 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2611655 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2611655 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 2611655 ']' 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:37.370 10:32:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 [2024-05-15 10:32:52.470540] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
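Up to this point nvmf_tcp_init (traced above) has built the TCP test bed: the first detected port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is ping-verified in both directions before nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using the device names detected in this run (the nvmf_tgt path is shortened here and the address flushes are omitted):

  # Condensed from the nvmf_tcp_init trace above (device names from this run).
  ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability
  # ...after which the target is launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &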
00:13:37.370 [2024-05-15 10:32:52.470664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.370 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.370 [2024-05-15 10:32:52.610283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.370 [2024-05-15 10:32:52.706702] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.370 [2024-05-15 10:32:52.706753] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.370 [2024-05-15 10:32:52.706763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.370 [2024-05-15 10:32:52.706773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.370 [2024-05-15 10:32:52.706781] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.370 [2024-05-15 10:32:52.706815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 [2024-05-15 10:32:53.234367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.370 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.630 [2024-05-15 10:32:53.254311] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:37.630 [2024-05-15 10:32:53.254632] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.630 NULL1 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.630 10:32:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:37.630 [2024-05-15 10:32:53.320748] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
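The rpc_cmd calls traced just above are the entire target-side setup for this test: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, listen on 10.0.0.2:4420, create the NULL1 null bdev and attach it as namespace 1 (the 1 GB namespace reported below). Assuming rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, the equivalent standalone sequence would look roughly like this, with arguments taken verbatim from the trace:

  # Target-side setup as standalone rpc.py calls (assumption: rpc_cmd wraps rpc.py).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # backs the 1 GB namespace seen below
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Initiator side: the fused_ordering tool then connects over TCP and exercises fused
  # command submission ordering against that namespace (path shortened from the trace):
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'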
00:13:37.630 [2024-05-15 10:32:53.320826] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611964 ] 00:13:37.630 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.890 Attached to nqn.2016-06.io.spdk:cnode1 00:13:37.890 Namespace ID: 1 size: 1GB 00:13:37.890 fused_ordering(0) 00:13:37.890 fused_ordering(1) 00:13:37.890 fused_ordering(2) 00:13:37.891 fused_ordering(3) 00:13:37.891 fused_ordering(4) 00:13:37.891 fused_ordering(5) 00:13:37.891 fused_ordering(6) 00:13:37.891 fused_ordering(7) 00:13:37.891 fused_ordering(8) 00:13:37.891 fused_ordering(9) 00:13:37.891 fused_ordering(10) 00:13:37.891 fused_ordering(11) 00:13:37.891 fused_ordering(12) 00:13:37.891 fused_ordering(13) 00:13:37.891 fused_ordering(14) 00:13:37.891 fused_ordering(15) 00:13:37.891 fused_ordering(16) 00:13:37.891 fused_ordering(17) 00:13:37.891 fused_ordering(18) 00:13:37.891 fused_ordering(19) 00:13:37.891 fused_ordering(20) 00:13:37.891 fused_ordering(21) 00:13:37.891 fused_ordering(22) 00:13:37.891 fused_ordering(23) 00:13:37.891 fused_ordering(24) 00:13:37.891 fused_ordering(25) 00:13:37.891 fused_ordering(26) 00:13:37.891 fused_ordering(27) 00:13:37.891 fused_ordering(28) 00:13:37.891 fused_ordering(29) 00:13:37.891 fused_ordering(30) 00:13:37.891 fused_ordering(31) 00:13:37.891 fused_ordering(32) 00:13:37.891 fused_ordering(33) 00:13:37.891 fused_ordering(34) 00:13:37.891 fused_ordering(35) 00:13:37.891 fused_ordering(36) 00:13:37.891 fused_ordering(37) 00:13:37.891 fused_ordering(38) 00:13:37.891 fused_ordering(39) 00:13:37.891 fused_ordering(40) 00:13:37.891 fused_ordering(41) 00:13:37.891 fused_ordering(42) 00:13:37.891 fused_ordering(43) 00:13:37.891 fused_ordering(44) 00:13:37.891 fused_ordering(45) 00:13:37.891 fused_ordering(46) 00:13:37.891 fused_ordering(47) 00:13:37.891 fused_ordering(48) 00:13:37.891 fused_ordering(49) 00:13:37.891 fused_ordering(50) 00:13:37.891 fused_ordering(51) 00:13:37.891 fused_ordering(52) 00:13:37.891 fused_ordering(53) 00:13:37.891 fused_ordering(54) 00:13:37.891 fused_ordering(55) 00:13:37.891 fused_ordering(56) 00:13:37.891 fused_ordering(57) 00:13:37.891 fused_ordering(58) 00:13:37.891 fused_ordering(59) 00:13:37.891 fused_ordering(60) 00:13:37.891 fused_ordering(61) 00:13:37.891 fused_ordering(62) 00:13:37.891 fused_ordering(63) 00:13:37.891 fused_ordering(64) 00:13:37.891 fused_ordering(65) 00:13:37.891 fused_ordering(66) 00:13:37.891 fused_ordering(67) 00:13:37.891 fused_ordering(68) 00:13:37.891 fused_ordering(69) 00:13:37.891 fused_ordering(70) 00:13:37.891 fused_ordering(71) 00:13:37.891 fused_ordering(72) 00:13:37.891 fused_ordering(73) 00:13:37.891 fused_ordering(74) 00:13:37.891 fused_ordering(75) 00:13:37.891 fused_ordering(76) 00:13:37.891 fused_ordering(77) 00:13:37.891 fused_ordering(78) 00:13:37.891 fused_ordering(79) 00:13:37.891 fused_ordering(80) 00:13:37.891 fused_ordering(81) 00:13:37.891 fused_ordering(82) 00:13:37.891 fused_ordering(83) 00:13:37.891 fused_ordering(84) 00:13:37.891 fused_ordering(85) 00:13:37.891 fused_ordering(86) 00:13:37.891 fused_ordering(87) 00:13:37.891 fused_ordering(88) 00:13:37.891 fused_ordering(89) 00:13:37.891 fused_ordering(90) 00:13:37.891 fused_ordering(91) 00:13:37.891 fused_ordering(92) 00:13:37.891 fused_ordering(93) 00:13:37.891 fused_ordering(94) 00:13:37.891 fused_ordering(95) 00:13:37.891 fused_ordering(96) 00:13:37.891 
fused_ordering(97) 00:13:37.891 fused_ordering(98) 00:13:37.891 fused_ordering(99) 00:13:37.891 fused_ordering(100) 00:13:37.891 fused_ordering(101) 00:13:37.891 fused_ordering(102) 00:13:37.891 fused_ordering(103) 00:13:37.891 fused_ordering(104) 00:13:37.891 fused_ordering(105) 00:13:37.891 fused_ordering(106) 00:13:37.891 fused_ordering(107) 00:13:37.891 fused_ordering(108) 00:13:37.891 fused_ordering(109) 00:13:37.891 fused_ordering(110) 00:13:37.891 fused_ordering(111) 00:13:37.891 fused_ordering(112) 00:13:37.891 fused_ordering(113) 00:13:37.891 fused_ordering(114) 00:13:37.891 fused_ordering(115) 00:13:37.891 fused_ordering(116) 00:13:37.891 fused_ordering(117) 00:13:37.891 fused_ordering(118) 00:13:37.891 fused_ordering(119) 00:13:37.891 fused_ordering(120) 00:13:37.891 fused_ordering(121) 00:13:37.891 fused_ordering(122) 00:13:37.891 fused_ordering(123) 00:13:37.891 fused_ordering(124) 00:13:37.891 fused_ordering(125) 00:13:37.891 fused_ordering(126) 00:13:37.891 fused_ordering(127) 00:13:37.891 fused_ordering(128) 00:13:37.891 fused_ordering(129) 00:13:37.891 fused_ordering(130) 00:13:37.891 fused_ordering(131) 00:13:37.891 fused_ordering(132) 00:13:37.891 fused_ordering(133) 00:13:37.891 fused_ordering(134) 00:13:37.891 fused_ordering(135) 00:13:37.891 fused_ordering(136) 00:13:37.891 fused_ordering(137) 00:13:37.891 fused_ordering(138) 00:13:37.891 fused_ordering(139) 00:13:37.891 fused_ordering(140) 00:13:37.891 fused_ordering(141) 00:13:37.891 fused_ordering(142) 00:13:37.891 fused_ordering(143) 00:13:37.891 fused_ordering(144) 00:13:37.891 fused_ordering(145) 00:13:37.891 fused_ordering(146) 00:13:37.891 fused_ordering(147) 00:13:37.891 fused_ordering(148) 00:13:37.891 fused_ordering(149) 00:13:37.891 fused_ordering(150) 00:13:37.891 fused_ordering(151) 00:13:37.891 fused_ordering(152) 00:13:37.891 fused_ordering(153) 00:13:37.891 fused_ordering(154) 00:13:37.891 fused_ordering(155) 00:13:37.891 fused_ordering(156) 00:13:37.891 fused_ordering(157) 00:13:37.891 fused_ordering(158) 00:13:37.891 fused_ordering(159) 00:13:37.891 fused_ordering(160) 00:13:37.891 fused_ordering(161) 00:13:37.891 fused_ordering(162) 00:13:37.891 fused_ordering(163) 00:13:37.891 fused_ordering(164) 00:13:37.891 fused_ordering(165) 00:13:37.891 fused_ordering(166) 00:13:37.891 fused_ordering(167) 00:13:37.891 fused_ordering(168) 00:13:37.891 fused_ordering(169) 00:13:37.891 fused_ordering(170) 00:13:37.891 fused_ordering(171) 00:13:37.891 fused_ordering(172) 00:13:37.891 fused_ordering(173) 00:13:37.891 fused_ordering(174) 00:13:37.891 fused_ordering(175) 00:13:37.891 fused_ordering(176) 00:13:37.891 fused_ordering(177) 00:13:37.891 fused_ordering(178) 00:13:37.891 fused_ordering(179) 00:13:37.891 fused_ordering(180) 00:13:37.891 fused_ordering(181) 00:13:37.891 fused_ordering(182) 00:13:37.891 fused_ordering(183) 00:13:37.891 fused_ordering(184) 00:13:37.891 fused_ordering(185) 00:13:37.891 fused_ordering(186) 00:13:37.891 fused_ordering(187) 00:13:37.891 fused_ordering(188) 00:13:37.891 fused_ordering(189) 00:13:37.891 fused_ordering(190) 00:13:37.891 fused_ordering(191) 00:13:37.891 fused_ordering(192) 00:13:37.891 fused_ordering(193) 00:13:37.891 fused_ordering(194) 00:13:37.891 fused_ordering(195) 00:13:37.891 fused_ordering(196) 00:13:37.891 fused_ordering(197) 00:13:37.891 fused_ordering(198) 00:13:37.891 fused_ordering(199) 00:13:37.891 fused_ordering(200) 00:13:37.891 fused_ordering(201) 00:13:37.891 fused_ordering(202) 00:13:37.891 fused_ordering(203) 00:13:37.891 fused_ordering(204) 
00:13:37.891 fused_ordering(205) 00:13:38.151 fused_ordering(206) 00:13:38.151 fused_ordering(207) 00:13:38.151 fused_ordering(208) 00:13:38.151 fused_ordering(209) 00:13:38.151 fused_ordering(210) 00:13:38.151 fused_ordering(211) 00:13:38.151 fused_ordering(212) 00:13:38.151 fused_ordering(213) 00:13:38.151 fused_ordering(214) 00:13:38.151 fused_ordering(215) 00:13:38.151 fused_ordering(216) 00:13:38.151 fused_ordering(217) 00:13:38.151 fused_ordering(218) 00:13:38.151 fused_ordering(219) 00:13:38.151 fused_ordering(220) 00:13:38.151 fused_ordering(221) 00:13:38.151 fused_ordering(222) 00:13:38.151 fused_ordering(223) 00:13:38.151 fused_ordering(224) 00:13:38.151 fused_ordering(225) 00:13:38.151 fused_ordering(226) 00:13:38.151 fused_ordering(227) 00:13:38.151 fused_ordering(228) 00:13:38.151 fused_ordering(229) 00:13:38.151 fused_ordering(230) 00:13:38.151 fused_ordering(231) 00:13:38.151 fused_ordering(232) 00:13:38.151 fused_ordering(233) 00:13:38.151 fused_ordering(234) 00:13:38.151 fused_ordering(235) 00:13:38.151 fused_ordering(236) 00:13:38.151 fused_ordering(237) 00:13:38.151 fused_ordering(238) 00:13:38.151 fused_ordering(239) 00:13:38.151 fused_ordering(240) 00:13:38.151 fused_ordering(241) 00:13:38.151 fused_ordering(242) 00:13:38.151 fused_ordering(243) 00:13:38.151 fused_ordering(244) 00:13:38.151 fused_ordering(245) 00:13:38.151 fused_ordering(246) 00:13:38.151 fused_ordering(247) 00:13:38.151 fused_ordering(248) 00:13:38.151 fused_ordering(249) 00:13:38.151 fused_ordering(250) 00:13:38.151 fused_ordering(251) 00:13:38.151 fused_ordering(252) 00:13:38.151 fused_ordering(253) 00:13:38.151 fused_ordering(254) 00:13:38.151 fused_ordering(255) 00:13:38.151 fused_ordering(256) 00:13:38.151 fused_ordering(257) 00:13:38.151 fused_ordering(258) 00:13:38.151 fused_ordering(259) 00:13:38.151 fused_ordering(260) 00:13:38.151 fused_ordering(261) 00:13:38.151 fused_ordering(262) 00:13:38.151 fused_ordering(263) 00:13:38.151 fused_ordering(264) 00:13:38.152 fused_ordering(265) 00:13:38.152 fused_ordering(266) 00:13:38.152 fused_ordering(267) 00:13:38.152 fused_ordering(268) 00:13:38.152 fused_ordering(269) 00:13:38.152 fused_ordering(270) 00:13:38.152 fused_ordering(271) 00:13:38.152 fused_ordering(272) 00:13:38.152 fused_ordering(273) 00:13:38.152 fused_ordering(274) 00:13:38.152 fused_ordering(275) 00:13:38.152 fused_ordering(276) 00:13:38.152 fused_ordering(277) 00:13:38.152 fused_ordering(278) 00:13:38.152 fused_ordering(279) 00:13:38.152 fused_ordering(280) 00:13:38.152 fused_ordering(281) 00:13:38.152 fused_ordering(282) 00:13:38.152 fused_ordering(283) 00:13:38.152 fused_ordering(284) 00:13:38.152 fused_ordering(285) 00:13:38.152 fused_ordering(286) 00:13:38.152 fused_ordering(287) 00:13:38.152 fused_ordering(288) 00:13:38.152 fused_ordering(289) 00:13:38.152 fused_ordering(290) 00:13:38.152 fused_ordering(291) 00:13:38.152 fused_ordering(292) 00:13:38.152 fused_ordering(293) 00:13:38.152 fused_ordering(294) 00:13:38.152 fused_ordering(295) 00:13:38.152 fused_ordering(296) 00:13:38.152 fused_ordering(297) 00:13:38.152 fused_ordering(298) 00:13:38.152 fused_ordering(299) 00:13:38.152 fused_ordering(300) 00:13:38.152 fused_ordering(301) 00:13:38.152 fused_ordering(302) 00:13:38.152 fused_ordering(303) 00:13:38.152 fused_ordering(304) 00:13:38.152 fused_ordering(305) 00:13:38.152 fused_ordering(306) 00:13:38.152 fused_ordering(307) 00:13:38.152 fused_ordering(308) 00:13:38.152 fused_ordering(309) 00:13:38.152 fused_ordering(310) 00:13:38.152 fused_ordering(311) 00:13:38.152 
fused_ordering(312) 00:13:38.152 fused_ordering(313) 00:13:38.152 fused_ordering(314) 00:13:38.152 fused_ordering(315) 00:13:38.152 fused_ordering(316) 00:13:38.152 fused_ordering(317) 00:13:38.152 fused_ordering(318) 00:13:38.152 fused_ordering(319) 00:13:38.152 fused_ordering(320) 00:13:38.152 fused_ordering(321) 00:13:38.152 fused_ordering(322) 00:13:38.152 fused_ordering(323) 00:13:38.152 fused_ordering(324) 00:13:38.152 fused_ordering(325) 00:13:38.152 fused_ordering(326) 00:13:38.152 fused_ordering(327) 00:13:38.152 fused_ordering(328) 00:13:38.152 fused_ordering(329) 00:13:38.152 fused_ordering(330) 00:13:38.152 fused_ordering(331) 00:13:38.152 fused_ordering(332) 00:13:38.152 fused_ordering(333) 00:13:38.152 fused_ordering(334) 00:13:38.152 fused_ordering(335) 00:13:38.152 fused_ordering(336) 00:13:38.152 fused_ordering(337) 00:13:38.152 fused_ordering(338) 00:13:38.152 fused_ordering(339) 00:13:38.152 fused_ordering(340) 00:13:38.152 fused_ordering(341) 00:13:38.152 fused_ordering(342) 00:13:38.152 fused_ordering(343) 00:13:38.152 fused_ordering(344) 00:13:38.152 fused_ordering(345) 00:13:38.152 fused_ordering(346) 00:13:38.152 fused_ordering(347) 00:13:38.152 fused_ordering(348) 00:13:38.152 fused_ordering(349) 00:13:38.152 fused_ordering(350) 00:13:38.152 fused_ordering(351) 00:13:38.152 fused_ordering(352) 00:13:38.152 fused_ordering(353) 00:13:38.152 fused_ordering(354) 00:13:38.152 fused_ordering(355) 00:13:38.152 fused_ordering(356) 00:13:38.152 fused_ordering(357) 00:13:38.152 fused_ordering(358) 00:13:38.152 fused_ordering(359) 00:13:38.152 fused_ordering(360) 00:13:38.152 fused_ordering(361) 00:13:38.152 fused_ordering(362) 00:13:38.152 fused_ordering(363) 00:13:38.152 fused_ordering(364) 00:13:38.152 fused_ordering(365) 00:13:38.152 fused_ordering(366) 00:13:38.152 fused_ordering(367) 00:13:38.152 fused_ordering(368) 00:13:38.152 fused_ordering(369) 00:13:38.152 fused_ordering(370) 00:13:38.152 fused_ordering(371) 00:13:38.152 fused_ordering(372) 00:13:38.152 fused_ordering(373) 00:13:38.152 fused_ordering(374) 00:13:38.152 fused_ordering(375) 00:13:38.152 fused_ordering(376) 00:13:38.152 fused_ordering(377) 00:13:38.152 fused_ordering(378) 00:13:38.152 fused_ordering(379) 00:13:38.152 fused_ordering(380) 00:13:38.152 fused_ordering(381) 00:13:38.152 fused_ordering(382) 00:13:38.152 fused_ordering(383) 00:13:38.152 fused_ordering(384) 00:13:38.152 fused_ordering(385) 00:13:38.152 fused_ordering(386) 00:13:38.152 fused_ordering(387) 00:13:38.152 fused_ordering(388) 00:13:38.152 fused_ordering(389) 00:13:38.152 fused_ordering(390) 00:13:38.152 fused_ordering(391) 00:13:38.152 fused_ordering(392) 00:13:38.152 fused_ordering(393) 00:13:38.152 fused_ordering(394) 00:13:38.152 fused_ordering(395) 00:13:38.152 fused_ordering(396) 00:13:38.152 fused_ordering(397) 00:13:38.152 fused_ordering(398) 00:13:38.152 fused_ordering(399) 00:13:38.152 fused_ordering(400) 00:13:38.152 fused_ordering(401) 00:13:38.152 fused_ordering(402) 00:13:38.152 fused_ordering(403) 00:13:38.152 fused_ordering(404) 00:13:38.152 fused_ordering(405) 00:13:38.152 fused_ordering(406) 00:13:38.152 fused_ordering(407) 00:13:38.152 fused_ordering(408) 00:13:38.152 fused_ordering(409) 00:13:38.152 fused_ordering(410) 00:13:38.412 fused_ordering(411) 00:13:38.412 fused_ordering(412) 00:13:38.412 fused_ordering(413) 00:13:38.412 fused_ordering(414) 00:13:38.412 fused_ordering(415) 00:13:38.412 fused_ordering(416) 00:13:38.412 fused_ordering(417) 00:13:38.412 fused_ordering(418) 00:13:38.412 fused_ordering(419) 
00:13:38.412 fused_ordering(420) 00:13:38.412 fused_ordering(421) 00:13:38.412 fused_ordering(422) 00:13:38.412 fused_ordering(423) 00:13:38.412 fused_ordering(424) 00:13:38.412 fused_ordering(425) 00:13:38.412 fused_ordering(426) 00:13:38.412 fused_ordering(427) 00:13:38.412 fused_ordering(428) 00:13:38.412 fused_ordering(429) 00:13:38.412 fused_ordering(430) 00:13:38.412 fused_ordering(431) 00:13:38.412 fused_ordering(432) 00:13:38.412 fused_ordering(433) 00:13:38.412 fused_ordering(434) 00:13:38.412 fused_ordering(435) 00:13:38.412 fused_ordering(436) 00:13:38.412 fused_ordering(437) 00:13:38.412 fused_ordering(438) 00:13:38.412 fused_ordering(439) 00:13:38.412 fused_ordering(440) 00:13:38.412 fused_ordering(441) 00:13:38.412 fused_ordering(442) 00:13:38.412 fused_ordering(443) 00:13:38.412 fused_ordering(444) 00:13:38.412 fused_ordering(445) 00:13:38.412 fused_ordering(446) 00:13:38.412 fused_ordering(447) 00:13:38.412 fused_ordering(448) 00:13:38.412 fused_ordering(449) 00:13:38.412 fused_ordering(450) 00:13:38.412 fused_ordering(451) 00:13:38.412 fused_ordering(452) 00:13:38.412 fused_ordering(453) 00:13:38.412 fused_ordering(454) 00:13:38.412 fused_ordering(455) 00:13:38.412 fused_ordering(456) 00:13:38.412 fused_ordering(457) 00:13:38.412 fused_ordering(458) 00:13:38.412 fused_ordering(459) 00:13:38.412 fused_ordering(460) 00:13:38.412 fused_ordering(461) 00:13:38.412 fused_ordering(462) 00:13:38.412 fused_ordering(463) 00:13:38.412 fused_ordering(464) 00:13:38.412 fused_ordering(465) 00:13:38.412 fused_ordering(466) 00:13:38.412 fused_ordering(467) 00:13:38.412 fused_ordering(468) 00:13:38.412 fused_ordering(469) 00:13:38.412 fused_ordering(470) 00:13:38.412 fused_ordering(471) 00:13:38.412 fused_ordering(472) 00:13:38.412 fused_ordering(473) 00:13:38.412 fused_ordering(474) 00:13:38.412 fused_ordering(475) 00:13:38.412 fused_ordering(476) 00:13:38.412 fused_ordering(477) 00:13:38.412 fused_ordering(478) 00:13:38.412 fused_ordering(479) 00:13:38.412 fused_ordering(480) 00:13:38.412 fused_ordering(481) 00:13:38.412 fused_ordering(482) 00:13:38.412 fused_ordering(483) 00:13:38.412 fused_ordering(484) 00:13:38.412 fused_ordering(485) 00:13:38.412 fused_ordering(486) 00:13:38.412 fused_ordering(487) 00:13:38.412 fused_ordering(488) 00:13:38.412 fused_ordering(489) 00:13:38.412 fused_ordering(490) 00:13:38.412 fused_ordering(491) 00:13:38.412 fused_ordering(492) 00:13:38.412 fused_ordering(493) 00:13:38.412 fused_ordering(494) 00:13:38.412 fused_ordering(495) 00:13:38.412 fused_ordering(496) 00:13:38.412 fused_ordering(497) 00:13:38.412 fused_ordering(498) 00:13:38.412 fused_ordering(499) 00:13:38.412 fused_ordering(500) 00:13:38.412 fused_ordering(501) 00:13:38.412 fused_ordering(502) 00:13:38.412 fused_ordering(503) 00:13:38.412 fused_ordering(504) 00:13:38.412 fused_ordering(505) 00:13:38.412 fused_ordering(506) 00:13:38.412 fused_ordering(507) 00:13:38.412 fused_ordering(508) 00:13:38.412 fused_ordering(509) 00:13:38.412 fused_ordering(510) 00:13:38.412 fused_ordering(511) 00:13:38.412 fused_ordering(512) 00:13:38.412 fused_ordering(513) 00:13:38.412 fused_ordering(514) 00:13:38.412 fused_ordering(515) 00:13:38.412 fused_ordering(516) 00:13:38.412 fused_ordering(517) 00:13:38.412 fused_ordering(518) 00:13:38.412 fused_ordering(519) 00:13:38.412 fused_ordering(520) 00:13:38.412 fused_ordering(521) 00:13:38.412 fused_ordering(522) 00:13:38.412 fused_ordering(523) 00:13:38.412 fused_ordering(524) 00:13:38.412 fused_ordering(525) 00:13:38.412 fused_ordering(526) 00:13:38.412 
fused_ordering(527) 00:13:38.412 fused_ordering(528) 00:13:38.412 fused_ordering(529) 00:13:38.412 fused_ordering(530) 00:13:38.412 fused_ordering(531) 00:13:38.412 fused_ordering(532) 00:13:38.412 fused_ordering(533) 00:13:38.412 fused_ordering(534) 00:13:38.412 fused_ordering(535) 00:13:38.412 fused_ordering(536) 00:13:38.412 fused_ordering(537) 00:13:38.412 fused_ordering(538) 00:13:38.412 fused_ordering(539) 00:13:38.412 fused_ordering(540) 00:13:38.412 fused_ordering(541) 00:13:38.412 fused_ordering(542) 00:13:38.412 fused_ordering(543) 00:13:38.412 fused_ordering(544) 00:13:38.412 fused_ordering(545) 00:13:38.412 fused_ordering(546) 00:13:38.412 fused_ordering(547) 00:13:38.412 fused_ordering(548) 00:13:38.412 fused_ordering(549) 00:13:38.412 fused_ordering(550) 00:13:38.412 fused_ordering(551) 00:13:38.412 fused_ordering(552) 00:13:38.412 fused_ordering(553) 00:13:38.412 fused_ordering(554) 00:13:38.412 fused_ordering(555) 00:13:38.412 fused_ordering(556) 00:13:38.412 fused_ordering(557) 00:13:38.412 fused_ordering(558) 00:13:38.412 fused_ordering(559) 00:13:38.412 fused_ordering(560) 00:13:38.412 fused_ordering(561) 00:13:38.412 fused_ordering(562) 00:13:38.412 fused_ordering(563) 00:13:38.412 fused_ordering(564) 00:13:38.412 fused_ordering(565) 00:13:38.412 fused_ordering(566) 00:13:38.412 fused_ordering(567) 00:13:38.412 fused_ordering(568) 00:13:38.412 fused_ordering(569) 00:13:38.412 fused_ordering(570) 00:13:38.412 fused_ordering(571) 00:13:38.412 fused_ordering(572) 00:13:38.412 fused_ordering(573) 00:13:38.412 fused_ordering(574) 00:13:38.412 fused_ordering(575) 00:13:38.412 fused_ordering(576) 00:13:38.412 fused_ordering(577) 00:13:38.412 fused_ordering(578) 00:13:38.412 fused_ordering(579) 00:13:38.412 fused_ordering(580) 00:13:38.412 fused_ordering(581) 00:13:38.412 fused_ordering(582) 00:13:38.412 fused_ordering(583) 00:13:38.412 fused_ordering(584) 00:13:38.412 fused_ordering(585) 00:13:38.412 fused_ordering(586) 00:13:38.413 fused_ordering(587) 00:13:38.413 fused_ordering(588) 00:13:38.413 fused_ordering(589) 00:13:38.413 fused_ordering(590) 00:13:38.413 fused_ordering(591) 00:13:38.413 fused_ordering(592) 00:13:38.413 fused_ordering(593) 00:13:38.413 fused_ordering(594) 00:13:38.413 fused_ordering(595) 00:13:38.413 fused_ordering(596) 00:13:38.413 fused_ordering(597) 00:13:38.413 fused_ordering(598) 00:13:38.413 fused_ordering(599) 00:13:38.413 fused_ordering(600) 00:13:38.413 fused_ordering(601) 00:13:38.413 fused_ordering(602) 00:13:38.413 fused_ordering(603) 00:13:38.413 fused_ordering(604) 00:13:38.413 fused_ordering(605) 00:13:38.413 fused_ordering(606) 00:13:38.413 fused_ordering(607) 00:13:38.413 fused_ordering(608) 00:13:38.413 fused_ordering(609) 00:13:38.413 fused_ordering(610) 00:13:38.413 fused_ordering(611) 00:13:38.413 fused_ordering(612) 00:13:38.413 fused_ordering(613) 00:13:38.413 fused_ordering(614) 00:13:38.413 fused_ordering(615) 00:13:38.979 fused_ordering(616) 00:13:38.979 fused_ordering(617) 00:13:38.979 fused_ordering(618) 00:13:38.979 fused_ordering(619) 00:13:38.979 fused_ordering(620) 00:13:38.979 fused_ordering(621) 00:13:38.979 fused_ordering(622) 00:13:38.979 fused_ordering(623) 00:13:38.979 fused_ordering(624) 00:13:38.979 fused_ordering(625) 00:13:38.979 fused_ordering(626) 00:13:38.979 fused_ordering(627) 00:13:38.979 fused_ordering(628) 00:13:38.979 fused_ordering(629) 00:13:38.979 fused_ordering(630) 00:13:38.979 fused_ordering(631) 00:13:38.979 fused_ordering(632) 00:13:38.979 fused_ordering(633) 00:13:38.979 fused_ordering(634) 
00:13:38.979 fused_ordering(635) 00:13:38.979 fused_ordering(636) 00:13:38.979 fused_ordering(637) 00:13:38.979 fused_ordering(638) 00:13:38.979 fused_ordering(639) 00:13:38.979 fused_ordering(640) 00:13:38.979 fused_ordering(641) 00:13:38.979 fused_ordering(642) 00:13:38.979 fused_ordering(643) 00:13:38.979 fused_ordering(644) 00:13:38.979 fused_ordering(645) 00:13:38.979 fused_ordering(646) 00:13:38.979 fused_ordering(647) 00:13:38.979 fused_ordering(648) 00:13:38.979 fused_ordering(649) 00:13:38.979 fused_ordering(650) 00:13:38.979 fused_ordering(651) 00:13:38.979 fused_ordering(652) 00:13:38.979 fused_ordering(653) 00:13:38.980 fused_ordering(654) 00:13:38.980 fused_ordering(655) 00:13:38.980 fused_ordering(656) 00:13:38.980 fused_ordering(657) 00:13:38.980 fused_ordering(658) 00:13:38.980 fused_ordering(659) 00:13:38.980 fused_ordering(660) 00:13:38.980 fused_ordering(661) 00:13:38.980 fused_ordering(662) 00:13:38.980 fused_ordering(663) 00:13:38.980 fused_ordering(664) 00:13:38.980 fused_ordering(665) 00:13:38.980 fused_ordering(666) 00:13:38.980 fused_ordering(667) 00:13:38.980 fused_ordering(668) 00:13:38.980 fused_ordering(669) 00:13:38.980 fused_ordering(670) 00:13:38.980 fused_ordering(671) 00:13:38.980 fused_ordering(672) 00:13:38.980 fused_ordering(673) 00:13:38.980 fused_ordering(674) 00:13:38.980 fused_ordering(675) 00:13:38.980 fused_ordering(676) 00:13:38.980 fused_ordering(677) 00:13:38.980 fused_ordering(678) 00:13:38.980 fused_ordering(679) 00:13:38.980 fused_ordering(680) 00:13:38.980 fused_ordering(681) 00:13:38.980 fused_ordering(682) 00:13:38.980 fused_ordering(683) 00:13:38.980 fused_ordering(684) 00:13:38.980 fused_ordering(685) 00:13:38.980 fused_ordering(686) 00:13:38.980 fused_ordering(687) 00:13:38.980 fused_ordering(688) 00:13:38.980 fused_ordering(689) 00:13:38.980 fused_ordering(690) 00:13:38.980 fused_ordering(691) 00:13:38.980 fused_ordering(692) 00:13:38.980 fused_ordering(693) 00:13:38.980 fused_ordering(694) 00:13:38.980 fused_ordering(695) 00:13:38.980 fused_ordering(696) 00:13:38.980 fused_ordering(697) 00:13:38.980 fused_ordering(698) 00:13:38.980 fused_ordering(699) 00:13:38.980 fused_ordering(700) 00:13:38.980 fused_ordering(701) 00:13:38.980 fused_ordering(702) 00:13:38.980 fused_ordering(703) 00:13:38.980 fused_ordering(704) 00:13:38.980 fused_ordering(705) 00:13:38.980 fused_ordering(706) 00:13:38.980 fused_ordering(707) 00:13:38.980 fused_ordering(708) 00:13:38.980 fused_ordering(709) 00:13:38.980 fused_ordering(710) 00:13:38.980 fused_ordering(711) 00:13:38.980 fused_ordering(712) 00:13:38.980 fused_ordering(713) 00:13:38.980 fused_ordering(714) 00:13:38.980 fused_ordering(715) 00:13:38.980 fused_ordering(716) 00:13:38.980 fused_ordering(717) 00:13:38.980 fused_ordering(718) 00:13:38.980 fused_ordering(719) 00:13:38.980 fused_ordering(720) 00:13:38.980 fused_ordering(721) 00:13:38.980 fused_ordering(722) 00:13:38.980 fused_ordering(723) 00:13:38.980 fused_ordering(724) 00:13:38.980 fused_ordering(725) 00:13:38.980 fused_ordering(726) 00:13:38.980 fused_ordering(727) 00:13:38.980 fused_ordering(728) 00:13:38.980 fused_ordering(729) 00:13:38.980 fused_ordering(730) 00:13:38.980 fused_ordering(731) 00:13:38.980 fused_ordering(732) 00:13:38.980 fused_ordering(733) 00:13:38.980 fused_ordering(734) 00:13:38.980 fused_ordering(735) 00:13:38.980 fused_ordering(736) 00:13:38.980 fused_ordering(737) 00:13:38.980 fused_ordering(738) 00:13:38.980 fused_ordering(739) 00:13:38.980 fused_ordering(740) 00:13:38.980 fused_ordering(741) 00:13:38.980 
fused_ordering(742) 00:13:38.980 fused_ordering(743) 00:13:38.980 fused_ordering(744) 00:13:38.980 fused_ordering(745) 00:13:38.980 fused_ordering(746) 00:13:38.980 fused_ordering(747) 00:13:38.980 fused_ordering(748) 00:13:38.980 fused_ordering(749) 00:13:38.980 fused_ordering(750) 00:13:38.980 fused_ordering(751) 00:13:38.980 fused_ordering(752) 00:13:38.980 fused_ordering(753) 00:13:38.980 fused_ordering(754) 00:13:38.980 fused_ordering(755) 00:13:38.980 fused_ordering(756) 00:13:38.980 fused_ordering(757) 00:13:38.980 fused_ordering(758) 00:13:38.980 fused_ordering(759) 00:13:38.980 fused_ordering(760) 00:13:38.980 fused_ordering(761) 00:13:38.980 fused_ordering(762) 00:13:38.980 fused_ordering(763) 00:13:38.980 fused_ordering(764) 00:13:38.980 fused_ordering(765) 00:13:38.980 fused_ordering(766) 00:13:38.980 fused_ordering(767) 00:13:38.980 fused_ordering(768) 00:13:38.980 fused_ordering(769) 00:13:38.980 fused_ordering(770) 00:13:38.980 fused_ordering(771) 00:13:38.980 fused_ordering(772) 00:13:38.980 fused_ordering(773) 00:13:38.980 fused_ordering(774) 00:13:38.980 fused_ordering(775) 00:13:38.980 fused_ordering(776) 00:13:38.980 fused_ordering(777) 00:13:38.980 fused_ordering(778) 00:13:38.980 fused_ordering(779) 00:13:38.980 fused_ordering(780) 00:13:38.980 fused_ordering(781) 00:13:38.980 fused_ordering(782) 00:13:38.980 fused_ordering(783) 00:13:38.980 fused_ordering(784) 00:13:38.980 fused_ordering(785) 00:13:38.980 fused_ordering(786) 00:13:38.980 fused_ordering(787) 00:13:38.980 fused_ordering(788) 00:13:38.980 fused_ordering(789) 00:13:38.980 fused_ordering(790) 00:13:38.980 fused_ordering(791) 00:13:38.980 fused_ordering(792) 00:13:38.980 fused_ordering(793) 00:13:38.980 fused_ordering(794) 00:13:38.980 fused_ordering(795) 00:13:38.980 fused_ordering(796) 00:13:38.980 fused_ordering(797) 00:13:38.980 fused_ordering(798) 00:13:38.980 fused_ordering(799) 00:13:38.980 fused_ordering(800) 00:13:38.980 fused_ordering(801) 00:13:38.980 fused_ordering(802) 00:13:38.980 fused_ordering(803) 00:13:38.980 fused_ordering(804) 00:13:38.980 fused_ordering(805) 00:13:38.980 fused_ordering(806) 00:13:38.980 fused_ordering(807) 00:13:38.980 fused_ordering(808) 00:13:38.980 fused_ordering(809) 00:13:38.980 fused_ordering(810) 00:13:38.980 fused_ordering(811) 00:13:38.980 fused_ordering(812) 00:13:38.980 fused_ordering(813) 00:13:38.980 fused_ordering(814) 00:13:38.980 fused_ordering(815) 00:13:38.980 fused_ordering(816) 00:13:38.980 fused_ordering(817) 00:13:38.980 fused_ordering(818) 00:13:38.980 fused_ordering(819) 00:13:38.980 fused_ordering(820) 00:13:39.240 fused_ordering(821) 00:13:39.240 fused_ordering(822) 00:13:39.240 fused_ordering(823) 00:13:39.240 fused_ordering(824) 00:13:39.240 fused_ordering(825) 00:13:39.240 fused_ordering(826) 00:13:39.240 fused_ordering(827) 00:13:39.240 fused_ordering(828) 00:13:39.240 fused_ordering(829) 00:13:39.240 fused_ordering(830) 00:13:39.240 fused_ordering(831) 00:13:39.240 fused_ordering(832) 00:13:39.240 fused_ordering(833) 00:13:39.240 fused_ordering(834) 00:13:39.240 fused_ordering(835) 00:13:39.240 fused_ordering(836) 00:13:39.240 fused_ordering(837) 00:13:39.240 fused_ordering(838) 00:13:39.240 fused_ordering(839) 00:13:39.240 fused_ordering(840) 00:13:39.240 fused_ordering(841) 00:13:39.240 fused_ordering(842) 00:13:39.240 fused_ordering(843) 00:13:39.240 fused_ordering(844) 00:13:39.240 fused_ordering(845) 00:13:39.240 fused_ordering(846) 00:13:39.240 fused_ordering(847) 00:13:39.240 fused_ordering(848) 00:13:39.240 fused_ordering(849) 
00:13:39.240 fused_ordering(850) 00:13:39.240 fused_ordering(851) 00:13:39.240 fused_ordering(852) 00:13:39.240 fused_ordering(853) 00:13:39.240 fused_ordering(854) 00:13:39.240 fused_ordering(855) 00:13:39.240 fused_ordering(856) 00:13:39.240 fused_ordering(857) 00:13:39.240 fused_ordering(858) 00:13:39.240 fused_ordering(859) 00:13:39.240 fused_ordering(860) 00:13:39.240 fused_ordering(861) 00:13:39.240 fused_ordering(862) 00:13:39.240 fused_ordering(863) 00:13:39.240 fused_ordering(864) 00:13:39.240 fused_ordering(865) 00:13:39.240 fused_ordering(866) 00:13:39.240 fused_ordering(867) 00:13:39.240 fused_ordering(868) 00:13:39.240 fused_ordering(869) 00:13:39.240 fused_ordering(870) 00:13:39.240 fused_ordering(871) 00:13:39.240 fused_ordering(872) 00:13:39.240 fused_ordering(873) 00:13:39.240 fused_ordering(874) 00:13:39.240 fused_ordering(875) 00:13:39.240 fused_ordering(876) 00:13:39.240 fused_ordering(877) 00:13:39.240 fused_ordering(878) 00:13:39.240 fused_ordering(879) 00:13:39.240 fused_ordering(880) 00:13:39.240 fused_ordering(881) 00:13:39.240 fused_ordering(882) 00:13:39.240 fused_ordering(883) 00:13:39.240 fused_ordering(884) 00:13:39.240 fused_ordering(885) 00:13:39.240 fused_ordering(886) 00:13:39.240 fused_ordering(887) 00:13:39.240 fused_ordering(888) 00:13:39.240 fused_ordering(889) 00:13:39.240 fused_ordering(890) 00:13:39.240 fused_ordering(891) 00:13:39.240 fused_ordering(892) 00:13:39.240 fused_ordering(893) 00:13:39.240 fused_ordering(894) 00:13:39.240 fused_ordering(895) 00:13:39.240 fused_ordering(896) 00:13:39.240 fused_ordering(897) 00:13:39.240 fused_ordering(898) 00:13:39.240 fused_ordering(899) 00:13:39.240 fused_ordering(900) 00:13:39.240 fused_ordering(901) 00:13:39.240 fused_ordering(902) 00:13:39.240 fused_ordering(903) 00:13:39.240 fused_ordering(904) 00:13:39.240 fused_ordering(905) 00:13:39.240 fused_ordering(906) 00:13:39.240 fused_ordering(907) 00:13:39.240 fused_ordering(908) 00:13:39.240 fused_ordering(909) 00:13:39.240 fused_ordering(910) 00:13:39.241 fused_ordering(911) 00:13:39.241 fused_ordering(912) 00:13:39.241 fused_ordering(913) 00:13:39.241 fused_ordering(914) 00:13:39.241 fused_ordering(915) 00:13:39.241 fused_ordering(916) 00:13:39.241 fused_ordering(917) 00:13:39.241 fused_ordering(918) 00:13:39.241 fused_ordering(919) 00:13:39.241 fused_ordering(920) 00:13:39.241 fused_ordering(921) 00:13:39.241 fused_ordering(922) 00:13:39.241 fused_ordering(923) 00:13:39.241 fused_ordering(924) 00:13:39.241 fused_ordering(925) 00:13:39.241 fused_ordering(926) 00:13:39.241 fused_ordering(927) 00:13:39.241 fused_ordering(928) 00:13:39.241 fused_ordering(929) 00:13:39.241 fused_ordering(930) 00:13:39.241 fused_ordering(931) 00:13:39.241 fused_ordering(932) 00:13:39.241 fused_ordering(933) 00:13:39.241 fused_ordering(934) 00:13:39.241 fused_ordering(935) 00:13:39.241 fused_ordering(936) 00:13:39.241 fused_ordering(937) 00:13:39.241 fused_ordering(938) 00:13:39.241 fused_ordering(939) 00:13:39.241 fused_ordering(940) 00:13:39.241 fused_ordering(941) 00:13:39.241 fused_ordering(942) 00:13:39.241 fused_ordering(943) 00:13:39.241 fused_ordering(944) 00:13:39.241 fused_ordering(945) 00:13:39.241 fused_ordering(946) 00:13:39.241 fused_ordering(947) 00:13:39.241 fused_ordering(948) 00:13:39.241 fused_ordering(949) 00:13:39.241 fused_ordering(950) 00:13:39.241 fused_ordering(951) 00:13:39.241 fused_ordering(952) 00:13:39.241 fused_ordering(953) 00:13:39.241 fused_ordering(954) 00:13:39.241 fused_ordering(955) 00:13:39.241 fused_ordering(956) 00:13:39.241 
fused_ordering(957) 00:13:39.241 fused_ordering(958) 00:13:39.241 fused_ordering(959) 00:13:39.241 fused_ordering(960) 00:13:39.241 fused_ordering(961) 00:13:39.241 fused_ordering(962) 00:13:39.241 fused_ordering(963) 00:13:39.241 fused_ordering(964) 00:13:39.241 fused_ordering(965) 00:13:39.241 fused_ordering(966) 00:13:39.241 fused_ordering(967) 00:13:39.241 fused_ordering(968) 00:13:39.241 fused_ordering(969) 00:13:39.241 fused_ordering(970) 00:13:39.241 fused_ordering(971) 00:13:39.241 fused_ordering(972) 00:13:39.241 fused_ordering(973) 00:13:39.241 fused_ordering(974) 00:13:39.241 fused_ordering(975) 00:13:39.241 fused_ordering(976) 00:13:39.241 fused_ordering(977) 00:13:39.241 fused_ordering(978) 00:13:39.241 fused_ordering(979) 00:13:39.241 fused_ordering(980) 00:13:39.241 fused_ordering(981) 00:13:39.241 fused_ordering(982) 00:13:39.241 fused_ordering(983) 00:13:39.241 fused_ordering(984) 00:13:39.241 fused_ordering(985) 00:13:39.241 fused_ordering(986) 00:13:39.241 fused_ordering(987) 00:13:39.241 fused_ordering(988) 00:13:39.241 fused_ordering(989) 00:13:39.241 fused_ordering(990) 00:13:39.241 fused_ordering(991) 00:13:39.241 fused_ordering(992) 00:13:39.241 fused_ordering(993) 00:13:39.241 fused_ordering(994) 00:13:39.241 fused_ordering(995) 00:13:39.241 fused_ordering(996) 00:13:39.241 fused_ordering(997) 00:13:39.241 fused_ordering(998) 00:13:39.241 fused_ordering(999) 00:13:39.241 fused_ordering(1000) 00:13:39.241 fused_ordering(1001) 00:13:39.241 fused_ordering(1002) 00:13:39.241 fused_ordering(1003) 00:13:39.241 fused_ordering(1004) 00:13:39.241 fused_ordering(1005) 00:13:39.241 fused_ordering(1006) 00:13:39.241 fused_ordering(1007) 00:13:39.241 fused_ordering(1008) 00:13:39.241 fused_ordering(1009) 00:13:39.241 fused_ordering(1010) 00:13:39.241 fused_ordering(1011) 00:13:39.241 fused_ordering(1012) 00:13:39.241 fused_ordering(1013) 00:13:39.241 fused_ordering(1014) 00:13:39.241 fused_ordering(1015) 00:13:39.241 fused_ordering(1016) 00:13:39.241 fused_ordering(1017) 00:13:39.241 fused_ordering(1018) 00:13:39.241 fused_ordering(1019) 00:13:39.241 fused_ordering(1020) 00:13:39.241 fused_ordering(1021) 00:13:39.241 fused_ordering(1022) 00:13:39.241 fused_ordering(1023) 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.241 rmmod nvme_tcp 00:13:39.241 rmmod nvme_fabrics 00:13:39.241 rmmod nvme_keyring 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2611655 ']' 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2611655 
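nvmftestfini here unwinds what the fused_ordering test set up: the exit trap is cleared, modprobe -v -r unloads nvme-tcp and nvme-fabrics (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are its output), and killprocess, traced next, stops the nvmf_tgt application before the test namespace is removed. A minimal sketch of the same teardown, assuming $nvmfpid holds the target's pid and that _remove_spdk_ns simply deletes the cvl_0_0_ns_spdk namespace; the kill/wait pair below stands in for the full killprocess helper:

    sync
    modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt app (pid 2611655 in this run)
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1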
00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 2611655 ']' 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 2611655 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:39.241 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2611655 00:13:39.502 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:39.502 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:39.502 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2611655' 00:13:39.502 killing process with pid 2611655 00:13:39.502 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 2611655 00:13:39.502 [2024-05-15 10:32:55.122720] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:39.502 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 2611655 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.791 10:32:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.334 10:32:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:42.334 00:13:42.334 real 0m11.059s 00:13:42.334 user 0m5.876s 00:13:42.334 sys 0m5.166s 00:13:42.334 10:32:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:42.334 10:32:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.334 ************************************ 00:13:42.334 END TEST nvmf_fused_ordering 00:13:42.334 ************************************ 00:13:42.334 10:32:57 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:42.334 10:32:57 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:42.334 10:32:57 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:42.334 10:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.334 ************************************ 00:13:42.334 START TEST nvmf_delete_subsystem 00:13:42.334 ************************************ 00:13:42.334 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:42.334 * 
Looking for test storage... 00:13:42.334 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:42.334 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.334 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:42.334 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.335 10:32:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.618 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:47.619 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:47.619 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:47.619 Found net devices under 0000:27:00.0: cvl_0_0 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.619 10:33:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:47.619 Found net devices under 0000:27:00.1: cvl_0_1 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.619 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:13:47.879 00:13:47.879 --- 10.0.0.2 ping statistics --- 00:13:47.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.879 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:13:47.879 00:13:47.879 --- 10.0.0.1 ping statistics --- 00:13:47.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.879 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.879 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2616316 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2616316 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 2616316 ']' 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.880 10:33:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:48.139 [2024-05-15 10:33:03.761449] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
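Because this rig has two physical ice ports (cvl_0_0 and cvl_0_1 under 0000:27:00.0/.1), nvmftestinit builds a point-to-point test network out of them: cvl_0_0 is moved into a private network namespace for the target, cvl_0_1 stays in the root namespace as the initiator side, and the two pings above confirm 10.0.0.1 and 10.0.0.2 reach each other before nvmf_tgt is started inside the namespace. Condensed from the commands traced above, with paths shortened; everything else is taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

waitforlisten then blocks until the new process is serving RPCs on /var/tmp/spdk.sock, which is what gates the rpc_cmd configuration calls that follow.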
00:13:48.139 [2024-05-15 10:33:03.761581] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.139 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.139 [2024-05-15 10:33:03.875287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.139 [2024-05-15 10:33:03.974004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.139 [2024-05-15 10:33:03.974057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.139 [2024-05-15 10:33:03.974067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.139 [2024-05-15 10:33:03.974077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.139 [2024-05-15 10:33:03.974085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.139 [2024-05-15 10:33:03.974145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.139 [2024-05-15 10:33:03.974146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 [2024-05-15 10:33:04.517172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.709 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 [2024-05-15 10:33:04.533099] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:48.710 [2024-05-15 10:33:04.533426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 NULL1 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 Delay0 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2616570 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:48.710 10:33:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:48.970 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.970 [2024-05-15 10:33:04.648182] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
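With the target up, delete_subsystem.sh drives the actual scenario over RPC: create a TCP transport, a subsystem with a listener on 10.0.0.2:4420, and a namespace backed by a delay bdev stacked on a null bdev, then start spdk_nvme_perf against it and, two seconds in, delete the subsystem while I/O is still queued. A condensed sketch of that sequence, assuming the rpc_cmd helper in the trace forwards to scripts/rpc.py on the target's /var/tmp/spdk.sock socket (the rpc() wrapper below is a stand-in, and paths are shortened):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # stand-in for rpc_cmd
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512                      # null backing bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # delete while perf still has I/O queued

The delay bdev keeps a full queue of 128 commands outstanding when the subsystem disappears, which is why the Read/Write completions below come back with errors and why spdk_nvme_perf exits reporting that errors occurred; the kill -0 polling further down only checks that the perf process actually exits afterwards.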
00:13:50.874 10:33:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.874 10:33:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.874 10:33:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 [2024-05-15 10:33:06.869156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(5) to be set 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, 
sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with 
error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 starting I/O failed: -6 00:13:51.134 Write completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.134 Read completed with error (sct=0, sc=8) 00:13:51.135 starting I/O failed: -6 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 starting I/O failed: -6 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 [2024-05-15 10:33:06.870039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025100 is same with the state(5) to be set 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 
Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Write completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:51.135 Read completed with error (sct=0, sc=8) 00:13:52.076 [2024-05-15 10:33:07.828279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000024c00 is same with the state(5) to be set 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Write completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.076 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 [2024-05-15 10:33:07.866894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030780 is same with the state(5) to be set 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read 
completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 [2024-05-15 10:33:07.869559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025380 is same with the state(5) to be set 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 [2024-05-15 10:33:07.869807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025880 is same with the state(5) to be set 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed 
with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 Read completed with error (sct=0, sc=8) 00:13:52.077 Write completed with error (sct=0, sc=8) 00:13:52.077 [2024-05-15 10:33:07.871717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030280 is same with the state(5) to be set 00:13:52.077 Initializing NVMe Controllers 00:13:52.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.077 Controller IO queue size 128, less than required. 00:13:52.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:52.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:52.077 Initialization complete. Launching workers. 00:13:52.077 ======================================================== 00:13:52.077 Latency(us) 00:13:52.077 Device Information : IOPS MiB/s Average min max 00:13:52.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.92 0.08 951289.97 438.83 2002358.47 00:13:52.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.94 0.08 907739.93 517.16 1010594.31 00:13:52.077 ======================================================== 00:13:52.077 Total : 330.86 0.16 929711.12 438.83 2002358.47 00:13:52.077 00:13:52.077 [2024-05-15 10:33:07.872624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000024c00 (9): Bad file descriptor 00:13:52.077 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:52.077 10:33:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.077 10:33:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:52.077 10:33:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2616570 00:13:52.077 10:33:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2616570 00:13:52.646 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2616570) - No such process 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2616570 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2616570 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # 
wait 2616570 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:13:52.646 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.647 [2024-05-15 10:33:08.398793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2617762 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:52.647 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:52.647 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.647 [2024-05-15 10:33:08.509793] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
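The trace above launches spdk_nvme_perf against cnode1 and records its PID in perf_pid; the lines that follow then poll that PID with kill -0 and sleep 0.5 until the perf process exits on its own once the subsystem is deleted underneath it. A minimal standalone sketch of that poll-until-exit idiom, using a placeholder background job instead of the real perf binary and mirroring the (( delay++ > 20 )) budget seen in the trace, looks like this:

  #!/usr/bin/env bash
  # Sketch of the kill -0 / sleep 0.5 poll loop traced above; the background
  # job is a stand-in for spdk_nvme_perf.
  sleep 3 &          # placeholder workload
  pid=$!

  delay=0
  while kill -0 "$pid" 2>/dev/null; do
      if (( delay++ > 20 )); then        # roughly 10 s at 0.5 s per iteration
          echo "process $pid still running after timeout" >&2
          exit 1
      fi
      sleep 0.5
  done
  echo "process $pid has exited"

Once the loop falls through, the script runs wait on the now-dead PID under the NOT helper (as in the earlier "No such process" sequence above), so the step passes only when wait itself reports a failure.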
00:13:53.216 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:53.216 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:53.216 10:33:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:53.786 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:53.786 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:53.786 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:54.354 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:54.354 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:54.354 10:33:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:54.613 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:54.613 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:54.613 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.181 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:55.181 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:55.181 10:33:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.750 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:55.750 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:55.750 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:56.008 Initializing NVMe Controllers 00:13:56.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.008 Controller IO queue size 128, less than required. 00:13:56.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:56.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:56.008 Initialization complete. Launching workers. 
00:13:56.008 ======================================================== 00:13:56.008 Latency(us) 00:13:56.008 Device Information : IOPS MiB/s Average min max 00:13:56.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003610.80 1000171.96 1011155.96 00:13:56.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004131.70 1000267.73 1042211.41 00:13:56.008 ======================================================== 00:13:56.008 Total : 256.00 0.12 1003871.25 1000171.96 1042211.41 00:13:56.008 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2617762 00:13:56.267 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2617762) - No such process 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2617762 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.267 rmmod nvme_tcp 00:13:56.267 rmmod nvme_fabrics 00:13:56.267 rmmod nvme_keyring 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2616316 ']' 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2616316 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 2616316 ']' 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 2616316 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:56.267 10:33:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2616316 00:13:56.267 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:56.267 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:56.267 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2616316' 00:13:56.267 killing process with pid 2616316 00:13:56.267 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 2616316 00:13:56.267 [2024-05-15 10:33:12.036418] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:56.267 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 2616316 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.836 10:33:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.750 10:33:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.750 00:13:58.750 real 0m16.862s 00:13:58.750 user 0m30.996s 00:13:58.750 sys 0m5.199s 00:13:58.750 10:33:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:58.750 10:33:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 ************************************ 00:13:58.750 END TEST nvmf_delete_subsystem 00:13:58.750 ************************************ 00:13:58.750 10:33:14 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:58.750 10:33:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:58.750 10:33:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:58.750 10:33:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.008 ************************************ 00:13:59.008 START TEST nvmf_ns_masking 00:13:59.008 ************************************ 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:59.008 * Looking for test storage... 
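Before the ns_masking run gets going, note the teardown the previous test just performed via nvmftestfini: unload nvme-tcp, nvme-fabrics and nvme-keyring, kill the nvmf target app by PID, and flush the SPDK network namespace addressing. A rough sketch of that kind of kill-by-PID cleanup helper, reconstructed from the traced commands rather than copied from the SPDK tree, is:

  #!/usr/bin/env bash
  # Illustrative cleanup in the spirit of the killprocess() calls in the trace;
  # names and behaviour are reconstructed, not the real SPDK helpers.
  killprocess() {
      local pid=$1
      # Only act if the PID is still alive, and log what is being killed.
      if ps --no-headers -o comm= "$pid" >/dev/null 2>&1; then
          echo "killing process with pid $pid"
          kill "$pid"
          # wait only works for children of this shell; ignore failures otherwise
          wait "$pid" 2>/dev/null || true
      fi
  }

  # Example: start a dummy daemon, then clean it up.
  sleep 60 &
  killprocess $!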
00:13:59.008 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.008 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=66259042-a38d-4450-b00d-a95dccec7c34 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.009 10:33:14 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.009 10:33:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:04.317 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:04.317 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.317 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:04.318 Found net devices under 0000:27:00.0: cvl_0_0 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.318 
10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:04.318 Found net devices under 0000:27:00.1: cvl_0_1 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.318 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:14:04.318 00:14:04.318 --- 10.0.0.2 ping statistics --- 00:14:04.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.318 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:14:04.318 00:14:04.318 --- 10.0.0.1 ping statistics --- 00:14:04.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.318 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2622384 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2622384 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 2622384 ']' 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.318 10:33:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.318 [2024-05-15 10:33:19.695932] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
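The nvmf/common.sh trace above builds a two-endpoint TCP test bed out of the two cvl_* ports: the target interface is moved into a private network namespace and addressed 10.0.0.2/24, the initiator side stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability before the target app is started inside the namespace. A generic reconstruction of that setup, with eth_tgt/eth_ini standing in for the cvl_0_0/cvl_0_1 names used in this run, would be:

  #!/usr/bin/env bash
  # Sketch of the netns-based target/initiator split seen in the log; run as root.
  set -e
  NS=nvmf_tgt_ns

  ip netns add "$NS"
  ip link set eth_tgt netns "$NS"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev eth_ini             # initiator stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec "$NS" ip link set eth_tgt up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions before starting the target in the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

This is why the target invocation in the log is prefixed with "ip netns exec cvl_0_0_ns_spdk": the nvmf_tgt process runs inside that namespace while the nvme-cli initiator commands run in the root namespace.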
00:14:04.318 [2024-05-15 10:33:19.696033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.318 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.318 [2024-05-15 10:33:19.815102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.318 [2024-05-15 10:33:19.910132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.318 [2024-05-15 10:33:19.910169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.318 [2024-05-15 10:33:19.910178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.318 [2024-05-15 10:33:19.910187] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.318 [2024-05-15 10:33:19.910194] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.318 [2024-05-15 10:33:19.910271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.318 [2024-05-15 10:33:19.910372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.318 [2024-05-15 10:33:19.910472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.318 [2024-05-15 10:33:19.910482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.579 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.837 [2024-05-15 10:33:20.574262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.837 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:04.837 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:04.837 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:05.095 Malloc1 00:14:05.095 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.095 Malloc2 00:14:05.353 10:33:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.353 10:33:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:05.611 10:33:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.611 [2024-05-15 10:33:21.394016] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:05.611 [2024-05-15 10:33:21.394286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.611 10:33:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:05.611 10:33:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 66259042-a38d-4450-b00d-a95dccec7c34 -a 10.0.0.2 -s 4420 -i 4 00:14:05.870 10:33:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.870 10:33:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:14:05.870 10:33:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.870 10:33:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:05.870 10:33:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:07.771 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:08.030 [ 0]:0x1 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=efd1a177284f4b04b6df2077b2d1af3a 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ efd1a177284f4b04b6df2077b2d1af3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # 
ns_is_visible 0x1 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:08.030 [ 0]:0x1 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.030 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=efd1a177284f4b04b6df2077b2d1af3a 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ efd1a177284f4b04b6df2077b2d1af3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:08.290 [ 1]:0x2 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.290 10:33:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.290 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:08.290 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.290 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:08.290 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.549 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.549 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 66259042-a38d-4450-b00d-a95dccec7c34 -a 10.0.0.2 -s 4420 -i 4 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:14:08.807 10:33:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 
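The connect/waitforserial sequence above is followed by the ns_is_visible checks: the traced helper greps the nvme list-ns output for the namespace ID and then compares the NGUID reported by nvme id-ns against all zeroes, treating an all-zero NGUID as "not visible to this host". One way to write such a check, reconstructed from the traced commands rather than lifted verbatim from ns_masking.sh, is:

  #!/usr/bin/env bash
  # Reconstruction of the visibility-check pattern in the trace.
  # $ctrl is the controller name discovered via 'nvme list-subsys -o json'
  # (nvme0 in this run); nsid is given in hex, e.g. 0x1.
  ctrl=nvme0

  ns_is_visible() {
      local nsid=$1
      nvme list-ns "/dev/$ctrl" | grep "$nsid" || true   # prints the "[ n]:0xN" line if present
      local nguid
      nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
      # An all-zero NGUID means the namespace is masked from this host.
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1 && echo "namespace 1 visible"
  ns_is_visible 0x2 || echo "namespace 2 masked"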
00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:11.346 [ 0]:0x2 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
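What makes namespace 1 invisible above is that it was re-added with --no-auto-visible, after which visibility is granted and revoked per host NQN through the nvmf_ns_add_host and nvmf_ns_remove_host RPCs exercised in the lines that follow. Stripped down to the calls that matter, the sequence is (sketch only; $rpc is assumed to point at SPDK's scripts/rpc.py in your checkout):

  #!/usr/bin/env bash
  # The masking-related RPCs as used in this test run.
  rpc=./scripts/rpc.py                      # adjust to your SPDK tree
  nqn=nqn.2016-06.io.spdk:cnode1
  host=nqn.2016-06.io.spdk:host1

  # Attach the namespace without auto-visibility: no host can see it yet.
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 --no-auto-visible

  # Grant visibility of namespace 1 to one host, then take it away again.
  $rpc nvmf_ns_add_host    "$nqn" 1 "$host"
  $rpc nvmf_ns_remove_host "$nqn" 1 "$host"

The NOT-wrapped nvmf_ns_remove_host call on namespace 2 at the end of this section is expected to fail, since namespace 2 was added without --no-auto-visible; the "Unable to add/remove ... to namespace ID 2" error and the JSON-RPC request dump below are that expected failure.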
00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:11.346 [ 0]:0x1 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=efd1a177284f4b04b6df2077b2d1af3a 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ efd1a177284f4b04b6df2077b2d1af3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.346 10:33:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:11.346 [ 1]:0x2 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.346 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:11.606 [ 0]:0x2 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.606 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 66259042-a38d-4450-b00d-a95dccec7c34 -a 10.0.0.2 -s 4420 -i 4 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:14:11.864 10:33:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:14:13.772 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:13.772 10:33:29 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.031 [ 0]:0x1 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=efd1a177284f4b04b6df2077b2d1af3a 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ efd1a177284f4b04b6df2077b2d1af3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:14.031 [ 1]:0x2 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.031 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.289 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:14.290 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.290 10:33:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=00000000000000000000000000000000 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.290 [ 0]:0x2 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.290 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:14.548 [2024-05-15 10:33:30.323320] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:14.548 request: 00:14:14.548 { 00:14:14.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:14.548 "nsid": 2, 00:14:14.548 "host": "nqn.2016-06.io.spdk:host1", 00:14:14.548 "method": "nvmf_ns_remove_host", 00:14:14.548 "req_id": 1 00:14:14.548 } 00:14:14.548 Got JSON-RPC error response 00:14:14.548 response: 00:14:14.548 { 00:14:14.548 "code": -32602, 00:14:14.548 "message": "Invalid parameters" 00:14:14.548 } 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:14.548 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.549 [ 0]:0x2 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.549 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=13faf41bf28845879d15786143df8f69 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 13faf41bf28845879d15786143df8f69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@108 -- # disconnect 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.807 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.807 rmmod nvme_tcp 00:14:14.807 rmmod nvme_fabrics 00:14:14.807 rmmod nvme_keyring 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2622384 ']' 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2622384 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 2622384 ']' 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 2622384 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2622384 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:15.065 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2622384' 00:14:15.066 killing process with pid 2622384 00:14:15.066 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 2622384 00:14:15.066 [2024-05-15 10:33:30.729285] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:15.066 10:33:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 2622384 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.636 10:33:31 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.636 10:33:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.542 10:33:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.542 00:14:17.542 real 0m18.724s 00:14:17.542 user 0m48.541s 00:14:17.542 sys 0m4.925s 00:14:17.542 10:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:17.542 10:33:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.542 ************************************ 00:14:17.542 END TEST nvmf_ns_masking 00:14:17.542 ************************************ 00:14:17.542 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:14:17.542 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:14:17.542 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:17.542 10:33:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:17.542 10:33:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:17.542 10:33:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.802 ************************************ 00:14:17.802 START TEST nvmf_host_management 00:14:17.802 ************************************ 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:17.802 * Looking for test storage... 00:14:17.802 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.802 10:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.803 10:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.116 10:33:38 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:23.116 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:23.116 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b 
== \0\x\1\0\1\9 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:23.116 Found net devices under 0000:27:00.0: cvl_0_0 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:23.116 Found net devices under 0000:27:00.1: cvl_0_1 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.116 10:33:38 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.116 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:14:23.117 00:14:23.117 --- 10.0.0.2 ping statistics --- 00:14:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.117 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:14:23.117 00:14:23.117 --- 10.0.0.1 ping statistics --- 00:14:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.117 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2628502 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2628502 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2628502 ']' 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.117 10:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:23.117 [2024-05-15 10:33:38.870267] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
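
At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with `-i 0 -e 0xFFFF -m 0x1E` and waits on /var/tmp/spdk.sock before the test issues configuration RPCs (the nvmf_create_transport call appears a few records below). A condensed sketch of that bring-up sequence, assuming a simple polling wait in place of waitforlisten, is:

# Condensed bring-up sketch; the polling loop is an assumption, the paths, core mask
# and transport options are the ones visible in this log.
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

"${NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Block until the app answers on its RPC socket, then configure the TCP transport.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
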
00:14:23.117 [2024-05-15 10:33:38.870368] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.117 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.376 [2024-05-15 10:33:38.990273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.376 [2024-05-15 10:33:39.084084] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.376 [2024-05-15 10:33:39.084120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.376 [2024-05-15 10:33:39.084129] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.376 [2024-05-15 10:33:39.084138] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.376 [2024-05-15 10:33:39.084145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.376 [2024-05-15 10:33:39.084282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.376 [2024-05-15 10:33:39.084392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.377 [2024-05-15 10:33:39.084433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.377 [2024-05-15 10:33:39.084461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:23.945 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:23.945 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-05-15 10:33:39.628203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.946 10:33:39 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 Malloc0 00:14:23.946 [2024-05-15 10:33:39.705559] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.946 [2024-05-15 10:33:39.705862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2628717 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2628717 /var/tmp/bdevperf.sock 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2628717 ']' 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:23.946 { 00:14:23.946 "params": { 00:14:23.946 "name": "Nvme$subsystem", 00:14:23.946 "trtype": "$TEST_TRANSPORT", 00:14:23.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:23.946 "adrfam": "ipv4", 00:14:23.946 "trsvcid": "$NVMF_PORT", 00:14:23.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:23.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:23.946 "hdgst": ${hdgst:-false}, 00:14:23.946 "ddgst": ${ddgst:-false} 00:14:23.946 }, 00:14:23.946 "method": "bdev_nvme_attach_controller" 00:14:23.946 } 00:14:23.946 EOF 00:14:23.946 )") 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
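
The records above show gen_nvmf_target_json rendering a one-controller bdevperf configuration through a here-document piped to `jq .`, and bdevperf being started with `--json /dev/fd/63`, i.e. the config is handed over via process substitution. The sketch below reproduces that pattern in condensed form; the bdev_nvme_attach_controller parameters mirror the ones printed in this log, while the outer "subsystems"/"bdev" wrapper is an assumption based on the standard SPDK JSON-config layout.

# Condensed sketch of the config generation above. Only the attach-controller params
# are taken from the log; the surrounding "subsystems" wrapper is assumed.
gen_target_json() {
        local n=${1:-0}
        cat <<EOF | jq .
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme$n",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$n",
            "hostnqn": "nqn.2016-06.io.spdk:host$n",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# bdevperf reads the config from a /dev/fd/N path, matching the --json /dev/fd/63 above.
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10
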
00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:23.946 10:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:23.946 "params": { 00:14:23.946 "name": "Nvme0", 00:14:23.946 "trtype": "tcp", 00:14:23.946 "traddr": "10.0.0.2", 00:14:23.946 "adrfam": "ipv4", 00:14:23.946 "trsvcid": "4420", 00:14:23.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:23.946 "hdgst": false, 00:14:23.946 "ddgst": false 00:14:23.946 }, 00:14:23.946 "method": "bdev_nvme_attach_controller" 00:14:23.946 }' 00:14:24.206 [2024-05-15 10:33:39.838317] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:14:24.206 [2024-05-15 10:33:39.838460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628717 ] 00:14:24.206 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.206 [2024-05-15 10:33:39.966235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.206 [2024-05-15 10:33:40.076673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.774 Running I/O for 10 seconds... 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.774 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.775 10:33:40 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.775 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:24.775 [2024-05-15 10:33:40.605335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) 
to be set 00:14:24.775 [2024-05-15 10:33:40.605618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be 
set 00:14:24.775 [2024-05-15 10:33:40.605881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.605993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 
00:14:24.775 [2024-05-15 10:33:40.606147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:24.775 [2024-05-15 10:33:40.606279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.775 [2024-05-15 10:33:40.606477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.775 [2024-05-15 10:33:40.606485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.606986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.606994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.776 [2024-05-15 10:33:40.607185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.776 [2024-05-15 10:33:40.607192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:24.777 [2024-05-15 10:33:40.607471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.777 [2024-05-15 10:33:40.607481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a4100 is same with the state(5) to be set 00:14:24.777 [2024-05-15 10:33:40.607610] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4100 was disconnected and freed. reset controller. 
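The block above is the host-side fallout of the I/O qpair being torn down mid-workload: the TCP transport keeps logging that the receive state of tqpair 0x618000002480 is already the state being set, and each of the 64 outstanding READs (cid 0 through 63) is completed as ABORTED - SQ DELETION before the qpair is disconnected, freed, and the controller reset is scheduled. When scanning a saved copy of this console output, the whole run condenses to two counters; a small sketch, assuming the output was captured to a file named autotest.log (a hypothetical name, not taken from this pipeline):

    # Count the two repeated messages instead of reading them line by line.
    grep -c 'is same with the state(5) to be set' autotest.log
    grep -c 'ABORTED - SQ DELETION' autotest.log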
00:14:24.777 [2024-05-15 10:33:40.608532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:24.777 task offset: 57344 on job bdev=Nvme0n1 fails 00:14:24.777 00:14:24.777 Latency(us) 00:14:24.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:24.777 Job: Nvme0n1 ended in about 0.27 seconds with error 00:14:24.777 Verification LBA range: start 0x0 length 0x400 00:14:24.777 Nvme0n1 : 0.27 1687.41 105.46 241.06 0.00 32152.22 5484.33 29525.69 00:14:24.777 =================================================================================================================== 00:14:24.777 Total : 1687.41 105.46 241.06 0.00 32152.22 5484.33 29525.69 00:14:24.777 [2024-05-15 10:33:40.611097] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:24.777 [2024-05-15 10:33:40.611131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.777 10:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:25.036 [2024-05-15 10:33:40.660473] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
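While the failed run is being reported, host_management.sh@85 adds nqn.2016-06.io.spdk:host0 to the allowed-host list of cnode0 through the harness's rpc_cmd wrapper. Outside the harness the same step is a single rpc.py call; a sketch, assuming the target is listening on the default /var/tmp/spdk.sock RPC socket (the socket path is not visible in this trace):

    # NQNs are the ones used throughout this test run.
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0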
00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2628717 00:14:25.972 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2628717) - No such process 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:25.972 { 00:14:25.972 "params": { 00:14:25.972 "name": "Nvme$subsystem", 00:14:25.972 "trtype": "$TEST_TRANSPORT", 00:14:25.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:25.972 "adrfam": "ipv4", 00:14:25.972 "trsvcid": "$NVMF_PORT", 00:14:25.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:25.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:25.972 "hdgst": ${hdgst:-false}, 00:14:25.972 "ddgst": ${ddgst:-false} 00:14:25.972 }, 00:14:25.972 "method": "bdev_nvme_attach_controller" 00:14:25.972 } 00:14:25.972 EOF 00:14:25.972 )") 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:25.972 10:33:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:25.972 "params": { 00:14:25.972 "name": "Nvme0", 00:14:25.972 "trtype": "tcp", 00:14:25.972 "traddr": "10.0.0.2", 00:14:25.972 "adrfam": "ipv4", 00:14:25.972 "trsvcid": "4420", 00:14:25.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:25.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:25.972 "hdgst": false, 00:14:25.972 "ddgst": false 00:14:25.972 }, 00:14:25.972 "method": "bdev_nvme_attach_controller" 00:14:25.972 }' 00:14:25.972 [2024-05-15 10:33:41.711679] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:14:25.972 [2024-05-15 10:33:41.711825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629148 ] 00:14:25.972 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.972 [2024-05-15 10:33:41.840584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.230 [2024-05-15 10:33:41.930844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.487 Running I/O for 1 seconds... 
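The second bdevperf pass above is driven entirely by the JSON fragment that gen_nvmf_target_json prints: one bdev_nvme_attach_controller entry per subsystem, filled in with the target address, subsystem NQN and host NQN, handed to bdevperf on /dev/fd/62, which is simply bash process substitution. A minimal way to reproduce that invocation by hand, assuming test/nvmf/common.sh has been sourced so gen_nvmf_target_json and the NVMF_* variables are defined:

    # Same flags as the run above: 64 outstanding 64 KiB verify I/Os for 1 second.
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1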
00:14:27.867 00:14:27.867 Latency(us) 00:14:27.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.867 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:27.867 Verification LBA range: start 0x0 length 0x400 00:14:27.867 Nvme0n1 : 1.03 2180.79 136.30 0.00 0.00 28937.03 6450.12 24006.87 00:14:27.867 =================================================================================================================== 00:14:27.867 Total : 2180.79 136.30 0.00 0.00 28937.03 6450.12 24006.87 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.867 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.867 rmmod nvme_tcp 00:14:28.127 rmmod nvme_fabrics 00:14:28.127 rmmod nvme_keyring 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2628502 ']' 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2628502 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 2628502 ']' 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 2628502 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2628502 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2628502' 00:14:28.127 killing process with pid 2628502 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 2628502 00:14:28.127 [2024-05-15 10:33:43.826005] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:28.127 10:33:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 2628502 00:14:28.694 [2024-05-15 10:33:44.284881] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.694 10:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.599 10:33:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.599 10:33:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:30.599 00:14:30.599 real 0m12.993s 00:14:30.599 user 0m25.046s 00:14:30.599 sys 0m4.931s 00:14:30.599 10:33:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:30.599 10:33:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.599 ************************************ 00:14:30.599 END TEST nvmf_host_management 00:14:30.599 ************************************ 00:14:30.599 10:33:46 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:30.599 10:33:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:30.599 10:33:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:30.599 10:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.858 ************************************ 00:14:30.858 START TEST nvmf_lvol 00:14:30.858 ************************************ 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:30.858 * Looking for test storage... 
00:14:30.858 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.858 10:33:46 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.858 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:30.859 10:33:46 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.859 10:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.859 10:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:36.128 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:36.128 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.128 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:36.129 Found net devices under 0000:27:00.0: cvl_0_0 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:36.129 Found net devices under 0000:27:00.1: cvl_0_1 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.129 
10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:14:36.129 00:14:36.129 --- 10.0.0.2 ping statistics --- 00:14:36.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.129 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:14:36.129 00:14:36.129 --- 10.0.0.1 ping statistics --- 00:14:36.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.129 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.129 10:33:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2633363 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2633363 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 2633363 ']' 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:36.387 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 [2024-05-15 10:33:52.080393] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:14:36.387 [2024-05-15 10:33:52.080493] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.387 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.387 [2024-05-15 10:33:52.198739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:36.646 [2024-05-15 10:33:52.294300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.646 [2024-05-15 10:33:52.294338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
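Before the lvol target was launched in the banner above, nvmftestinit and nvmf_tcp_init built the usual two-port loopback topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is ping-checked in both directions. Condensed from the trace above into a plain sequence (the cvl_0_* interface names are specific to this rig):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

The target itself then runs inside that namespace, which is why the nvmf_tgt command above is prefixed with ip netns exec cvl_0_0_ns_spdk.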
00:14:36.646 [2024-05-15 10:33:52.294350] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.646 [2024-05-15 10:33:52.294363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.646 [2024-05-15 10:33:52.294370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.646 [2024-05-15 10:33:52.294446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.646 [2024-05-15 10:33:52.294541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.646 [2024-05-15 10:33:52.294548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.218 10:33:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:37.218 [2024-05-15 10:33:52.967024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.218 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:37.476 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:37.476 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:37.476 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:37.476 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:37.734 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:37.992 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d1bf9984-88e1-4fc5-bd57-777609e92103 00:14:37.992 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d1bf9984-88e1-4fc5-bd57-777609e92103 lvol 20 00:14:37.992 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a5e428c-f57e-4753-acba-a5279574cf99 00:14:37.992 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.250 10:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a5e428c-f57e-4753-acba-a5279574cf99 00:14:38.250 10:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:38.511 [2024-05-15 10:33:54.213761] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:38.511 [2024-05-15 10:33:54.214053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.511 10:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.771 10:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2633951 00:14:38.771 10:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:38.771 10:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:38.771 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.707 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0a5e428c-f57e-4753-acba-a5279574cf99 MY_SNAPSHOT 00:14:39.707 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c45082ec-b5c8-4047-a297-74307a5ba085 00:14:39.707 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0a5e428c-f57e-4753-acba-a5279574cf99 30 00:14:40.003 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c45082ec-b5c8-4047-a297-74307a5ba085 MY_CLONE 00:14:40.263 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7178b0f1-bb4e-4c03-8041-0f928060dfc1 00:14:40.263 10:33:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7178b0f1-bb4e-4c03-8041-0f928060dfc1 00:14:40.523 10:33:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2633951 00:14:50.501 Initializing NVMe Controllers 00:14:50.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:50.501 Controller IO queue size 128, less than required. 00:14:50.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:50.501 Initialization complete. Launching workers. 
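The per-core numbers printed next come from the spdk_nvme_perf job launched a few entries earlier. For reference, that invocation reduces to the sketch below (binary path shortened to ./build/bin; the address and flags are the ones visible in the log):

  # 10 s of 4 KiB random writes at queue depth 128 over NVMe/TCP,
  # pinned to cores 3 and 4 (mask 0x18), which match the two lcores in the table that follows.
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18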
00:14:50.501 ======================================================== 00:14:50.501 Latency(us) 00:14:50.501 Device Information : IOPS MiB/s Average min max 00:14:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14783.30 57.75 8659.77 290.74 97355.87 00:14:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14688.10 57.38 8719.08 2699.79 64909.84 00:14:50.501 ======================================================== 00:14:50.501 Total : 29471.39 115.12 8689.33 290.74 97355.87 00:14:50.501 00:14:50.501 10:34:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a5e428c-f57e-4753-acba-a5279574cf99 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1bf9984-88e1-4fc5-bd57-777609e92103 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.501 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.502 rmmod nvme_tcp 00:14:50.502 rmmod nvme_fabrics 00:14:50.502 rmmod nvme_keyring 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2633363 ']' 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2633363 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 2633363 ']' 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 2633363 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2633363 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2633363' 00:14:50.502 killing process with pid 2633363 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 2633363 00:14:50.502 [2024-05-15 10:34:05.457065] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in 
v24.09 hit 1 times 00:14:50.502 10:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 2633363 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.502 10:34:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.413 10:34:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.413 00:14:52.413 real 0m21.633s 00:14:52.413 user 1m3.156s 00:14:52.414 sys 0m6.348s 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 ************************************ 00:14:52.414 END TEST nvmf_lvol 00:14:52.414 ************************************ 00:14:52.414 10:34:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.414 10:34:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:52.414 10:34:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:52.414 10:34:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 ************************************ 00:14:52.414 START TEST nvmf_lvs_grow 00:14:52.414 ************************************ 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.414 * Looking for test storage... 
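Stripped of the xtrace noise, the nvmf_lvol run that just ended (END TEST nvmf_lvol above) drives the target through the RPC sequence sketched below. Here rpc.py stands for scripts/rpc.py in the SPDK tree, and the shell variables take the place of the generated UUIDs seen in the log:

  # Build a raid0-backed lvstore and export an lvol over NVMe/TCP.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  LVS=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
  LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 20)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Issued while spdk_nvme_perf is still writing to the namespace:
  SNAP=$(rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$LVOL" 30
  CLONE=$(rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)
  rpc.py bdev_lvol_inflate "$CLONE"
  # Teardown once the perf job has been waited on:
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$LVOL"
  rpc.py bdev_lvol_delete_lvstore -u "$LVS"

The snapshot, resize, clone and inflate calls land before the script waits for the perf process, so the exported lvol has to keep servicing I/O while its layout changes.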
00:14:52.414 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.414 10:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.675 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:52.675 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.675 10:34:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.675 10:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow 
-- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:59.254 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:59.254 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:59.254 Found net devices under 0000:27:00.0: cvl_0_0 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:59.254 Found net devices under 0000:27:00.1: cvl_0_1 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.254 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:14:59.255 00:14:59.255 --- 10.0.0.2 ping statistics --- 00:14:59.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.255 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:59.255 00:14:59.255 --- 10.0.0.1 ping statistics --- 00:14:59.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.255 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.255 10:34:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2640203 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2640203 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 2640203 ']' 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:59.255 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:59.255 [2024-05-15 10:34:15.108719] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:14:59.255 [2024-05-15 10:34:15.108848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.514 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.514 [2024-05-15 10:34:15.253830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.514 [2024-05-15 10:34:15.351451] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.514 [2024-05-15 10:34:15.351508] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:59.514 [2024-05-15 10:34:15.351518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.514 [2024-05-15 10:34:15.351529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.514 [2024-05-15 10:34:15.351536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.514 [2024-05-15 10:34:15.351581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.086 10:34:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.346 [2024-05-15 10:34:15.995930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.346 10:34:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:00.346 10:34:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:00.346 10:34:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:00.346 10:34:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:00.346 ************************************ 00:15:00.346 START TEST lvs_grow_clean 00:15:00.347 ************************************ 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:00.347 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.607 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:00.607 
10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:00.607 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:00.607 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:00.607 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead lvol 150 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b61de700-f4ff-46bc-a499-6635cbc5caa4 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:00.867 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:01.127 [2024-05-15 10:34:16.795839] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:01.127 [2024-05-15 10:34:16.795924] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:01.127 true 00:15:01.127 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:01.127 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:01.127 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:01.127 10:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:01.388 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b61de700-f4ff-46bc-a499-6635cbc5caa4 00:15:01.388 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:01.649 [2024-05-15 10:34:17.320000] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:01.649 [2024-05-15 10:34:17.320377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2640818 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2640818 /var/tmp/bdevperf.sock 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 2640818 ']' 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:01.649 10:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:01.649 [2024-05-15 10:34:17.519265] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
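Before the bdevperf output begins, the essence of lvs_grow_clean is easy to state: put an lvstore on a file-backed AIO bdev, enlarge the file, rescan, and grow the lvstore while I/O is running. A minimal sketch under those assumptions (rpc.py again stands for scripts/rpc.py, and ./aio_file is a hypothetical stand-in for the test's aio_bdev file):

  # Back an lvstore with a 200 MiB file, then grow both the file and the lvstore.
  truncate -s 200M ./aio_file
  rpc.py bdev_aio_create ./aio_file aio_bdev 4096
  LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters to start
  rpc.py bdev_lvol_create -u "$LVS" lvol 150                  # 150 MiB lvol
  truncate -s 400M ./aio_file                                 # enlarge the backing file
  rpc.py bdev_aio_rescan aio_bdev                             # 51200 -> 102400 blocks
  rpc.py bdev_lvol_grow_lvstore -u "$LVS"                     # total_data_clusters: 49 -> 99

The bdevperf job started below keeps random writes going against the exported lvol during the grow, and the script then verifies the new cluster counts with bdev_lvol_get_lvstores.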
00:15:01.649 [2024-05-15 10:34:17.519354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640818 ] 00:15:01.909 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.909 [2024-05-15 10:34:17.608538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.910 [2024-05-15 10:34:17.700427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.479 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:02.479 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:15:02.479 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:02.739 Nvme0n1 00:15:02.739 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:03.000 [ 00:15:03.000 { 00:15:03.000 "name": "Nvme0n1", 00:15:03.000 "aliases": [ 00:15:03.000 "b61de700-f4ff-46bc-a499-6635cbc5caa4" 00:15:03.000 ], 00:15:03.000 "product_name": "NVMe disk", 00:15:03.000 "block_size": 4096, 00:15:03.000 "num_blocks": 38912, 00:15:03.000 "uuid": "b61de700-f4ff-46bc-a499-6635cbc5caa4", 00:15:03.000 "assigned_rate_limits": { 00:15:03.000 "rw_ios_per_sec": 0, 00:15:03.000 "rw_mbytes_per_sec": 0, 00:15:03.000 "r_mbytes_per_sec": 0, 00:15:03.000 "w_mbytes_per_sec": 0 00:15:03.000 }, 00:15:03.000 "claimed": false, 00:15:03.000 "zoned": false, 00:15:03.000 "supported_io_types": { 00:15:03.000 "read": true, 00:15:03.000 "write": true, 00:15:03.000 "unmap": true, 00:15:03.000 "write_zeroes": true, 00:15:03.000 "flush": true, 00:15:03.000 "reset": true, 00:15:03.000 "compare": true, 00:15:03.000 "compare_and_write": true, 00:15:03.000 "abort": true, 00:15:03.000 "nvme_admin": true, 00:15:03.000 "nvme_io": true 00:15:03.000 }, 00:15:03.000 "memory_domains": [ 00:15:03.000 { 00:15:03.000 "dma_device_id": "system", 00:15:03.000 "dma_device_type": 1 00:15:03.000 } 00:15:03.000 ], 00:15:03.000 "driver_specific": { 00:15:03.000 "nvme": [ 00:15:03.000 { 00:15:03.000 "trid": { 00:15:03.000 "trtype": "TCP", 00:15:03.000 "adrfam": "IPv4", 00:15:03.000 "traddr": "10.0.0.2", 00:15:03.000 "trsvcid": "4420", 00:15:03.000 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:03.000 }, 00:15:03.000 "ctrlr_data": { 00:15:03.000 "cntlid": 1, 00:15:03.000 "vendor_id": "0x8086", 00:15:03.000 "model_number": "SPDK bdev Controller", 00:15:03.000 "serial_number": "SPDK0", 00:15:03.000 "firmware_revision": "24.05", 00:15:03.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:03.000 "oacs": { 00:15:03.000 "security": 0, 00:15:03.000 "format": 0, 00:15:03.000 "firmware": 0, 00:15:03.000 "ns_manage": 0 00:15:03.000 }, 00:15:03.000 "multi_ctrlr": true, 00:15:03.000 "ana_reporting": false 00:15:03.000 }, 00:15:03.000 "vs": { 00:15:03.000 "nvme_version": "1.3" 00:15:03.000 }, 00:15:03.000 "ns_data": { 00:15:03.000 "id": 1, 00:15:03.000 "can_share": true 00:15:03.000 } 00:15:03.000 } 00:15:03.000 ], 00:15:03.000 "mp_policy": "active_passive" 00:15:03.000 } 00:15:03.000 } 00:15:03.000 ] 00:15:03.000 10:34:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2641040 00:15:03.000 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:03.000 10:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.000 Running I/O for 10 seconds... 00:15:03.935 Latency(us) 00:15:03.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.935 Nvme0n1 : 1.00 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:15:03.935 =================================================================================================================== 00:15:03.935 Total : 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:15:03.935 00:15:04.871 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:04.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.871 Nvme0n1 : 2.00 23451.50 91.61 0.00 0.00 0.00 0.00 0.00 00:15:04.871 =================================================================================================================== 00:15:04.871 Total : 23451.50 91.61 0.00 0.00 0.00 0.00 0.00 00:15:04.871 00:15:05.130 true 00:15:05.130 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:05.130 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:05.130 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:05.130 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:05.130 10:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2641040 00:15:06.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.103 Nvme0n1 : 3.00 23490.00 91.76 0.00 0.00 0.00 0.00 0.00 00:15:06.103 =================================================================================================================== 00:15:06.103 Total : 23490.00 91.76 0.00 0.00 0.00 0.00 0.00 00:15:06.103 00:15:07.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.037 Nvme0n1 : 4.00 23564.75 92.05 0.00 0.00 0.00 0.00 0.00 00:15:07.037 =================================================================================================================== 00:15:07.038 Total : 23564.75 92.05 0.00 0.00 0.00 0.00 0.00 00:15:07.038 00:15:07.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.973 Nvme0n1 : 5.00 23577.40 92.10 0.00 0.00 0.00 0.00 0.00 00:15:07.973 =================================================================================================================== 00:15:07.973 Total : 23577.40 92.10 0.00 0.00 0.00 0.00 0.00 00:15:07.973 00:15:08.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.905 Nvme0n1 : 6.00 23627.17 92.29 0.00 0.00 0.00 0.00 0.00 00:15:08.905 
=================================================================================================================== 00:15:08.905 Total : 23627.17 92.29 0.00 0.00 0.00 0.00 0.00 00:15:08.905 00:15:09.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.841 Nvme0n1 : 7.00 23615.43 92.25 0.00 0.00 0.00 0.00 0.00 00:15:09.841 =================================================================================================================== 00:15:09.841 Total : 23615.43 92.25 0.00 0.00 0.00 0.00 0.00 00:15:09.841 00:15:11.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.218 Nvme0n1 : 8.00 23632.12 92.31 0.00 0.00 0.00 0.00 0.00 00:15:11.218 =================================================================================================================== 00:15:11.218 Total : 23632.12 92.31 0.00 0.00 0.00 0.00 0.00 00:15:11.218 00:15:12.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.151 Nvme0n1 : 9.00 23659.22 92.42 0.00 0.00 0.00 0.00 0.00 00:15:12.151 =================================================================================================================== 00:15:12.151 Total : 23659.22 92.42 0.00 0.00 0.00 0.00 0.00 00:15:12.151 00:15:13.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.091 Nvme0n1 : 10.00 23659.40 92.42 0.00 0.00 0.00 0.00 0.00 00:15:13.091 =================================================================================================================== 00:15:13.091 Total : 23659.40 92.42 0.00 0.00 0.00 0.00 0.00 00:15:13.091 00:15:13.091 00:15:13.091 Latency(us) 00:15:13.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.091 Nvme0n1 : 10.01 23660.20 92.42 0.00 0.00 5406.07 1724.63 12210.39 00:15:13.091 =================================================================================================================== 00:15:13.091 Total : 23660.20 92.42 0.00 0.00 5406.07 1724.63 12210.39 00:15:13.091 0 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2640818 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 2640818 ']' 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 2640818 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2640818 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2640818' 00:15:13.091 killing process with pid 2640818 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 2640818 00:15:13.091 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.091 00:15:13.091 Latency(us) 00:15:13.091 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:13.091 =================================================================================================================== 00:15:13.091 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.091 10:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 2640818 00:15:13.349 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.608 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:13.608 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:13.608 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:13.867 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:13.867 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:13.867 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.867 [2024-05-15 10:34:29.691655] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:14.130 request: 00:15:14.130 { 00:15:14.130 "uuid": "f13c8ded-f0bd-4a05-98ac-0083ebf03ead", 00:15:14.130 "method": "bdev_lvol_get_lvstores", 00:15:14.130 "req_id": 1 00:15:14.130 } 00:15:14.130 Got JSON-RPC error response 00:15:14.130 response: 00:15:14.130 { 00:15:14.130 "code": -19, 00:15:14.130 "message": "No such device" 00:15:14.130 } 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:14.130 10:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.391 aio_bdev 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b61de700-f4ff-46bc-a499-6635cbc5caa4 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=b61de700-f4ff-46bc-a499-6635cbc5caa4 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:14.392 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b61de700-f4ff-46bc-a499-6635cbc5caa4 -t 2000 00:15:14.650 [ 00:15:14.650 { 00:15:14.650 "name": "b61de700-f4ff-46bc-a499-6635cbc5caa4", 00:15:14.650 "aliases": [ 00:15:14.650 "lvs/lvol" 00:15:14.650 ], 00:15:14.650 "product_name": "Logical Volume", 00:15:14.650 "block_size": 4096, 00:15:14.650 "num_blocks": 38912, 00:15:14.650 "uuid": "b61de700-f4ff-46bc-a499-6635cbc5caa4", 00:15:14.650 "assigned_rate_limits": { 00:15:14.650 "rw_ios_per_sec": 0, 00:15:14.650 "rw_mbytes_per_sec": 0, 00:15:14.650 "r_mbytes_per_sec": 0, 00:15:14.650 "w_mbytes_per_sec": 0 00:15:14.650 }, 00:15:14.650 "claimed": false, 00:15:14.650 "zoned": false, 00:15:14.650 "supported_io_types": { 00:15:14.650 "read": true, 00:15:14.650 "write": true, 00:15:14.650 "unmap": true, 00:15:14.650 "write_zeroes": true, 00:15:14.650 "flush": false, 00:15:14.650 "reset": true, 00:15:14.650 "compare": false, 00:15:14.650 "compare_and_write": false, 00:15:14.650 "abort": false, 00:15:14.651 "nvme_admin": false, 00:15:14.651 "nvme_io": false 00:15:14.651 }, 00:15:14.651 "driver_specific": { 00:15:14.651 "lvol": { 00:15:14.651 "lvol_store_uuid": "f13c8ded-f0bd-4a05-98ac-0083ebf03ead", 00:15:14.651 "base_bdev": "aio_bdev", 00:15:14.651 "thin_provision": false, 00:15:14.651 
"num_allocated_clusters": 38, 00:15:14.651 "snapshot": false, 00:15:14.651 "clone": false, 00:15:14.651 "esnap_clone": false 00:15:14.651 } 00:15:14.651 } 00:15:14.651 } 00:15:14.651 ] 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:14.651 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:14.908 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:14.908 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b61de700-f4ff-46bc-a499-6635cbc5caa4 00:15:14.908 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f13c8ded-f0bd-4a05-98ac-0083ebf03ead 00:15:15.167 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:15.167 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.167 00:15:15.167 real 0m14.941s 00:15:15.167 user 0m14.521s 00:15:15.167 sys 0m1.174s 00:15:15.167 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:15.167 10:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:15.167 ************************************ 00:15:15.167 END TEST lvs_grow_clean 00:15:15.167 ************************************ 00:15:15.167 10:34:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:15.167 10:34:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:15.167 10:34:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:15.167 10:34:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:15.427 ************************************ 00:15:15.427 START TEST lvs_grow_dirty 00:15:15.427 ************************************ 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:15.427 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:15.686 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 lvol 150 00:15:15.947 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:15.947 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.947 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:15.947 [2024-05-15 10:34:31.804177] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:15.947 [2024-05-15 10:34:31.804261] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:15.947 true 00:15:15.947 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:15.947 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:16.206 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters 
== 49 )) 00:15:16.206 10:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:16.463 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:16.463 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:16.463 [2024-05-15 10:34:32.324547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2643558 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2643558 /var/tmp/bdevperf.sock 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2643558 ']' 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:16.723 10:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:16.723 [2024-05-15 10:34:32.528959] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
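Condensed, the lvs_grow_dirty setup traced above is a short rpc.py sequence. The sketch below restates it with placeholders: /tmp/aio_file stands in for the workspace aio_bdev file, scripts/rpc.py for the full workspace path, and $lvs/$lvol for the UUIDs printed in the log; a running nvmf_tgt on the default RPC socket is assumed.

  # back an lvstore with a 200M AIO file and carve a 150M lvol out of it
  truncate -s 200M /tmp/aio_file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  # enlarge the backing file; bdev_aio_rescan makes the AIO bdev pick up the new size (51200 -> 102400 blocks)
  truncate -s 400M /tmp/aio_file
  scripts/rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP so bdevperf can attach to it
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

At this point the lvstore still reports 49 total data clusters: the AIO bdev has grown, but the lvstore has not yet been told to claim the extra space.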
00:15:16.723 [2024-05-15 10:34:32.529087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643558 ] 00:15:16.982 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.982 [2024-05-15 10:34:32.640618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.982 [2024-05-15 10:34:32.731795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.550 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:17.550 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:15:17.550 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:17.807 Nvme0n1 00:15:17.807 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:18.064 [ 00:15:18.064 { 00:15:18.064 "name": "Nvme0n1", 00:15:18.064 "aliases": [ 00:15:18.064 "a50082b4-d01b-4b87-ac64-acffc512d5e9" 00:15:18.064 ], 00:15:18.064 "product_name": "NVMe disk", 00:15:18.064 "block_size": 4096, 00:15:18.064 "num_blocks": 38912, 00:15:18.064 "uuid": "a50082b4-d01b-4b87-ac64-acffc512d5e9", 00:15:18.064 "assigned_rate_limits": { 00:15:18.064 "rw_ios_per_sec": 0, 00:15:18.064 "rw_mbytes_per_sec": 0, 00:15:18.064 "r_mbytes_per_sec": 0, 00:15:18.064 "w_mbytes_per_sec": 0 00:15:18.064 }, 00:15:18.064 "claimed": false, 00:15:18.064 "zoned": false, 00:15:18.064 "supported_io_types": { 00:15:18.064 "read": true, 00:15:18.064 "write": true, 00:15:18.064 "unmap": true, 00:15:18.064 "write_zeroes": true, 00:15:18.064 "flush": true, 00:15:18.064 "reset": true, 00:15:18.064 "compare": true, 00:15:18.064 "compare_and_write": true, 00:15:18.064 "abort": true, 00:15:18.064 "nvme_admin": true, 00:15:18.064 "nvme_io": true 00:15:18.064 }, 00:15:18.064 "memory_domains": [ 00:15:18.064 { 00:15:18.064 "dma_device_id": "system", 00:15:18.064 "dma_device_type": 1 00:15:18.064 } 00:15:18.064 ], 00:15:18.064 "driver_specific": { 00:15:18.064 "nvme": [ 00:15:18.064 { 00:15:18.064 "trid": { 00:15:18.064 "trtype": "TCP", 00:15:18.064 "adrfam": "IPv4", 00:15:18.064 "traddr": "10.0.0.2", 00:15:18.064 "trsvcid": "4420", 00:15:18.064 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:18.064 }, 00:15:18.064 "ctrlr_data": { 00:15:18.064 "cntlid": 1, 00:15:18.064 "vendor_id": "0x8086", 00:15:18.064 "model_number": "SPDK bdev Controller", 00:15:18.064 "serial_number": "SPDK0", 00:15:18.064 "firmware_revision": "24.05", 00:15:18.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:18.064 "oacs": { 00:15:18.064 "security": 0, 00:15:18.064 "format": 0, 00:15:18.064 "firmware": 0, 00:15:18.064 "ns_manage": 0 00:15:18.064 }, 00:15:18.064 "multi_ctrlr": true, 00:15:18.064 "ana_reporting": false 00:15:18.064 }, 00:15:18.064 "vs": { 00:15:18.064 "nvme_version": "1.3" 00:15:18.064 }, 00:15:18.064 "ns_data": { 00:15:18.064 "id": 1, 00:15:18.064 "can_share": true 00:15:18.064 } 00:15:18.064 } 00:15:18.064 ], 00:15:18.064 "mp_policy": "active_passive" 00:15:18.064 } 00:15:18.064 } 00:15:18.064 ] 00:15:18.064 10:34:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2643861 00:15:18.064 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:18.064 10:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.064 Running I/O for 10 seconds... 00:15:19.000 Latency(us) 00:15:19.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.000 Nvme0n1 : 1.00 23345.00 91.19 0.00 0.00 0.00 0.00 0.00 00:15:19.000 =================================================================================================================== 00:15:19.000 Total : 23345.00 91.19 0.00 0.00 0.00 0.00 0.00 00:15:19.000 00:15:19.934 10:34:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:19.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.934 Nvme0n1 : 2.00 23420.00 91.48 0.00 0.00 0.00 0.00 0.00 00:15:19.934 =================================================================================================================== 00:15:19.934 Total : 23420.00 91.48 0.00 0.00 0.00 0.00 0.00 00:15:19.934 00:15:20.194 true 00:15:20.194 10:34:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:20.194 10:34:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:20.194 10:34:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:20.194 10:34:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:20.195 10:34:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2643861 00:15:21.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.133 Nvme0n1 : 3.00 23426.00 91.51 0.00 0.00 0.00 0.00 0.00 00:15:21.133 =================================================================================================================== 00:15:21.133 Total : 23426.00 91.51 0.00 0.00 0.00 0.00 0.00 00:15:21.133 00:15:22.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.109 Nvme0n1 : 4.00 23448.50 91.60 0.00 0.00 0.00 0.00 0.00 00:15:22.109 =================================================================================================================== 00:15:22.109 Total : 23448.50 91.60 0.00 0.00 0.00 0.00 0.00 00:15:22.109 00:15:23.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.048 Nvme0n1 : 5.00 23458.20 91.63 0.00 0.00 0.00 0.00 0.00 00:15:23.048 =================================================================================================================== 00:15:23.048 Total : 23458.20 91.63 0.00 0.00 0.00 0.00 0.00 00:15:23.048 00:15:23.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.985 Nvme0n1 : 6.00 23496.33 91.78 0.00 0.00 0.00 0.00 0.00 00:15:23.985 
=================================================================================================================== 00:15:23.985 Total : 23496.33 91.78 0.00 0.00 0.00 0.00 0.00 00:15:23.985 00:15:24.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.923 Nvme0n1 : 7.00 23451.57 91.61 0.00 0.00 0.00 0.00 0.00 00:15:24.923 =================================================================================================================== 00:15:24.923 Total : 23451.57 91.61 0.00 0.00 0.00 0.00 0.00 00:15:24.923 00:15:26.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.302 Nvme0n1 : 8.00 23451.38 91.61 0.00 0.00 0.00 0.00 0.00 00:15:26.302 =================================================================================================================== 00:15:26.302 Total : 23451.38 91.61 0.00 0.00 0.00 0.00 0.00 00:15:26.302 00:15:27.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.239 Nvme0n1 : 9.00 23455.22 91.62 0.00 0.00 0.00 0.00 0.00 00:15:27.239 =================================================================================================================== 00:15:27.239 Total : 23455.22 91.62 0.00 0.00 0.00 0.00 0.00 00:15:27.239 00:15:28.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.171 Nvme0n1 : 10.00 23470.50 91.68 0.00 0.00 0.00 0.00 0.00 00:15:28.171 =================================================================================================================== 00:15:28.171 Total : 23470.50 91.68 0.00 0.00 0.00 0.00 0.00 00:15:28.171 00:15:28.171 00:15:28.171 Latency(us) 00:15:28.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.171 Nvme0n1 : 10.01 23469.76 91.68 0.00 0.00 5450.53 2060.93 11865.47 00:15:28.171 =================================================================================================================== 00:15:28.171 Total : 23469.76 91.68 0.00 0.00 5450.53 2060.93 11865.47 00:15:28.171 0 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2643558 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 2643558 ']' 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 2643558 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2643558 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2643558' 00:15:28.172 killing process with pid 2643558 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 2643558 00:15:28.172 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.172 00:15:28.172 Latency(us) 00:15:28.172 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:28.172 =================================================================================================================== 00:15:28.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.172 10:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 2643558 00:15:28.431 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.691 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.691 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:28.691 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2640203 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2640203 00:15:28.951 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2640203 Killed "${NVMF_APP[@]}" "$@" 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2645936 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2645936 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2645936 ']' 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
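The grow-under-I/O and dirty-shutdown steps just traced reduce to the following sketch (same placeholder conventions as above; the PIDs and UUIDs in the log are specific to this run):

  # while bdevperf keeps random writes in flight, grow the lvstore onto the enlarged AIO bdev
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expected: 99
  # kill the target without a clean shutdown so the blobstore metadata is left dirty
  kill -9 "$nvmfpid"
  # a freshly started nvmf_tgt re-registers the same backing file; loading the lvstore
  # now goes through blobstore recovery ("Performing recovery on blobstore" further down)
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expected: 61

The point of the dirty variant is exactly this last stretch: the grown capacity has to survive the unclean shutdown and be reported again once recovery has replayed the metadata.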
00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:28.951 10:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:28.951 [2024-05-15 10:34:44.765240] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:15:28.951 [2024-05-15 10:34:44.765359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.210 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.210 [2024-05-15 10:34:44.895349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.210 [2024-05-15 10:34:44.994061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.210 [2024-05-15 10:34:44.994107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.210 [2024-05-15 10:34:44.994117] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.210 [2024-05-15 10:34:44.994127] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.210 [2024-05-15 10:34:44.994134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.210 [2024-05-15 10:34:44.994163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.780 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:29.780 [2024-05-15 10:34:45.637214] blobstore.c:4859:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:29.780 [2024-05-15 10:34:45.637346] blobstore.c:4806:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:29.780 [2024-05-15 10:34:45.637374] blobstore.c:4806:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # 
local bdev_timeout= 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.040 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a50082b4-d01b-4b87-ac64-acffc512d5e9 -t 2000 00:15:30.301 [ 00:15:30.301 { 00:15:30.301 "name": "a50082b4-d01b-4b87-ac64-acffc512d5e9", 00:15:30.301 "aliases": [ 00:15:30.301 "lvs/lvol" 00:15:30.301 ], 00:15:30.301 "product_name": "Logical Volume", 00:15:30.301 "block_size": 4096, 00:15:30.301 "num_blocks": 38912, 00:15:30.301 "uuid": "a50082b4-d01b-4b87-ac64-acffc512d5e9", 00:15:30.301 "assigned_rate_limits": { 00:15:30.301 "rw_ios_per_sec": 0, 00:15:30.301 "rw_mbytes_per_sec": 0, 00:15:30.301 "r_mbytes_per_sec": 0, 00:15:30.301 "w_mbytes_per_sec": 0 00:15:30.301 }, 00:15:30.301 "claimed": false, 00:15:30.301 "zoned": false, 00:15:30.301 "supported_io_types": { 00:15:30.301 "read": true, 00:15:30.301 "write": true, 00:15:30.301 "unmap": true, 00:15:30.301 "write_zeroes": true, 00:15:30.301 "flush": false, 00:15:30.301 "reset": true, 00:15:30.301 "compare": false, 00:15:30.301 "compare_and_write": false, 00:15:30.301 "abort": false, 00:15:30.301 "nvme_admin": false, 00:15:30.301 "nvme_io": false 00:15:30.301 }, 00:15:30.301 "driver_specific": { 00:15:30.301 "lvol": { 00:15:30.301 "lvol_store_uuid": "1b3ec7fe-94f8-45cf-a1e1-762b7917cd02", 00:15:30.301 "base_bdev": "aio_bdev", 00:15:30.301 "thin_provision": false, 00:15:30.301 "num_allocated_clusters": 38, 00:15:30.301 "snapshot": false, 00:15:30.301 "clone": false, 00:15:30.301 "esnap_clone": false 00:15:30.301 } 00:15:30.301 } 00:15:30.301 } 00:15:30.301 ] 00:15:30.301 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:15:30.301 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:30.301 10:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:30.301 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:30.301 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:30.301 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:30.562 [2024-05-15 10:34:46.367496] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:30.562 10:34:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:30.562 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:15:30.563 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:30.824 request: 00:15:30.824 { 00:15:30.824 "uuid": "1b3ec7fe-94f8-45cf-a1e1-762b7917cd02", 00:15:30.824 "method": "bdev_lvol_get_lvstores", 00:15:30.824 "req_id": 1 00:15:30.824 } 00:15:30.824 Got JSON-RPC error response 00:15:30.824 response: 00:15:30.824 { 00:15:30.824 "code": -19, 00:15:30.824 "message": "No such device" 00:15:30.824 } 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:30.824 aio_bdev 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
i 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:15:30.824 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:31.085 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a50082b4-d01b-4b87-ac64-acffc512d5e9 -t 2000 00:15:31.085 [ 00:15:31.085 { 00:15:31.085 "name": "a50082b4-d01b-4b87-ac64-acffc512d5e9", 00:15:31.085 "aliases": [ 00:15:31.085 "lvs/lvol" 00:15:31.085 ], 00:15:31.085 "product_name": "Logical Volume", 00:15:31.085 "block_size": 4096, 00:15:31.085 "num_blocks": 38912, 00:15:31.085 "uuid": "a50082b4-d01b-4b87-ac64-acffc512d5e9", 00:15:31.085 "assigned_rate_limits": { 00:15:31.085 "rw_ios_per_sec": 0, 00:15:31.085 "rw_mbytes_per_sec": 0, 00:15:31.085 "r_mbytes_per_sec": 0, 00:15:31.085 "w_mbytes_per_sec": 0 00:15:31.085 }, 00:15:31.085 "claimed": false, 00:15:31.085 "zoned": false, 00:15:31.085 "supported_io_types": { 00:15:31.085 "read": true, 00:15:31.085 "write": true, 00:15:31.085 "unmap": true, 00:15:31.085 "write_zeroes": true, 00:15:31.085 "flush": false, 00:15:31.085 "reset": true, 00:15:31.085 "compare": false, 00:15:31.085 "compare_and_write": false, 00:15:31.085 "abort": false, 00:15:31.085 "nvme_admin": false, 00:15:31.085 "nvme_io": false 00:15:31.085 }, 00:15:31.085 "driver_specific": { 00:15:31.085 "lvol": { 00:15:31.085 "lvol_store_uuid": "1b3ec7fe-94f8-45cf-a1e1-762b7917cd02", 00:15:31.085 "base_bdev": "aio_bdev", 00:15:31.085 "thin_provision": false, 00:15:31.085 "num_allocated_clusters": 38, 00:15:31.085 "snapshot": false, 00:15:31.085 "clone": false, 00:15:31.085 "esnap_clone": false 00:15:31.085 } 00:15:31.085 } 00:15:31.085 } 00:15:31.085 ] 00:15:31.345 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:15:31.345 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:31.345 10:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:31.345 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:31.345 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:31.345 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:31.604 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:31.604 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a50082b4-d01b-4b87-ac64-acffc512d5e9 00:15:31.604 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b3ec7fe-94f8-45cf-a1e1-762b7917cd02 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:31.862 00:15:31.862 real 0m16.603s 00:15:31.862 user 0m43.015s 00:15:31.862 sys 0m3.112s 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:31.862 ************************************ 00:15:31.862 END TEST lvs_grow_dirty 00:15:31.862 ************************************ 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:31.862 nvmf_trace.0 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.862 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.122 rmmod nvme_tcp 00:15:32.122 rmmod nvme_fabrics 00:15:32.122 rmmod nvme_keyring 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2645936 ']' 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2645936 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 2645936 ']' 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 2645936 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2645936 00:15:32.122 10:34:47 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2645936' 00:15:32.122 killing process with pid 2645936 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 2645936 00:15:32.122 10:34:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 2645936 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.693 10:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.599 10:34:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.599 00:15:34.599 real 0m42.150s 00:15:34.599 user 1m3.029s 00:15:34.599 sys 0m9.826s 00:15:34.599 10:34:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:34.599 10:34:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 ************************************ 00:15:34.599 END TEST nvmf_lvs_grow 00:15:34.599 ************************************ 00:15:34.599 10:34:50 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.599 10:34:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:34.599 10:34:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:34.599 10:34:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 ************************************ 00:15:34.599 START TEST nvmf_bdev_io_wait 00:15:34.599 ************************************ 00:15:34.599 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.599 * Looking for test storage... 
00:15:34.599 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:34.599 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.858 10:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:40.198 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:40.198 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:40.198 Found net devices under 0000:27:00.0: cvl_0_0 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.198 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:40.199 Found net devices under 0000:27:00.1: cvl_0_1 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:40.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:15:40.199 00:15:40.199 --- 10.0.0.2 ping statistics --- 00:15:40.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.199 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:15:40.199 00:15:40.199 --- 10.0.0.1 ping statistics --- 00:15:40.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.199 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2650616 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2650616 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 2650616 ']' 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:40.199 10:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 [2024-05-15 10:34:55.873106] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:15:40.199 [2024-05-15 10:34:55.873207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.199 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.199 [2024-05-15 10:34:55.992275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.458 [2024-05-15 10:34:56.090212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.458 [2024-05-15 10:34:56.090248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.458 [2024-05-15 10:34:56.090257] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.458 [2024-05-15 10:34:56.090267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.458 [2024-05-15 10:34:56.090274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.458 [2024-05-15 10:34:56.090428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.458 [2024-05-15 10:34:56.090527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.458 [2024-05-15 10:34:56.090551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.458 [2024-05-15 10:34:56.090559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.720 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:40.720 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:15:40.720 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.720 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:40.720 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 [2024-05-15 10:34:56.730138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 Malloc0 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.981 [2024-05-15 10:34:56.804914] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:40.981 [2024-05-15 10:34:56.805204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2650786 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2650787 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2650789 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2650791 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:15:40.981 { 00:15:40.981 "params": { 00:15:40.981 "name": "Nvme$subsystem", 00:15:40.981 "trtype": "$TEST_TRANSPORT", 00:15:40.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.981 "adrfam": "ipv4", 00:15:40.981 "trsvcid": "$NVMF_PORT", 00:15:40.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.981 "hdgst": ${hdgst:-false}, 00:15:40.981 "ddgst": ${ddgst:-false} 00:15:40.981 }, 00:15:40.981 "method": "bdev_nvme_attach_controller" 00:15:40.981 } 00:15:40.981 EOF 00:15:40.981 )") 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:40.981 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:40.982 { 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme$subsystem", 00:15:40.982 "trtype": "$TEST_TRANSPORT", 00:15:40.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "$NVMF_PORT", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.982 "hdgst": ${hdgst:-false}, 00:15:40.982 "ddgst": ${ddgst:-false} 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 } 00:15:40.982 EOF 00:15:40.982 )") 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:40.982 { 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme$subsystem", 00:15:40.982 "trtype": "$TEST_TRANSPORT", 00:15:40.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "$NVMF_PORT", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.982 "hdgst": ${hdgst:-false}, 00:15:40.982 "ddgst": ${ddgst:-false} 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 } 00:15:40.982 EOF 00:15:40.982 )") 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:40.982 { 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme$subsystem", 00:15:40.982 "trtype": "$TEST_TRANSPORT", 00:15:40.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "$NVMF_PORT", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.982 "hdgst": ${hdgst:-false}, 00:15:40.982 "ddgst": ${ddgst:-false} 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 } 00:15:40.982 EOF 00:15:40.982 )") 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2650786 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme1", 00:15:40.982 "trtype": "tcp", 00:15:40.982 "traddr": "10.0.0.2", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "4420", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.982 "hdgst": false, 00:15:40.982 "ddgst": false 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 }' 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme1", 00:15:40.982 "trtype": "tcp", 00:15:40.982 "traddr": "10.0.0.2", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "4420", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.982 "hdgst": false, 00:15:40.982 "ddgst": false 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 }' 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme1", 00:15:40.982 "trtype": "tcp", 00:15:40.982 "traddr": "10.0.0.2", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "4420", 00:15:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.982 "hdgst": false, 00:15:40.982 "ddgst": false 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 }' 00:15:40.982 10:34:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:40.982 "params": { 00:15:40.982 "name": "Nvme1", 00:15:40.982 "trtype": "tcp", 00:15:40.982 "traddr": "10.0.0.2", 00:15:40.982 "adrfam": "ipv4", 00:15:40.982 "trsvcid": "4420", 00:15:40.982 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:40.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.982 "hdgst": false, 00:15:40.982 "ddgst": false 00:15:40.982 }, 00:15:40.982 "method": "bdev_nvme_attach_controller" 00:15:40.982 }' 00:15:41.242 [2024-05-15 10:34:56.875893] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:15:41.242 [2024-05-15 10:34:56.876010] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:41.242 [2024-05-15 10:34:56.882118] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:15:41.242 [2024-05-15 10:34:56.882228] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:41.242 [2024-05-15 10:34:56.884736] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:15:41.242 [2024-05-15 10:34:56.884843] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:41.242 [2024-05-15 10:34:56.893302] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:15:41.242 [2024-05-15 10:34:56.893431] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:41.242 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.242 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.500 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.500 [2024-05-15 10:34:57.137621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.500 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.500 [2024-05-15 10:34:57.181536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.500 [2024-05-15 10:34:57.226716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.500 [2024-05-15 10:34:57.276312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:41.500 [2024-05-15 10:34:57.314533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:41.500 [2024-05-15 10:34:57.327540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.500 [2024-05-15 10:34:57.355829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:41.758 [2024-05-15 10:34:57.456606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:42.015 Running I/O for 1 seconds... 00:15:42.015 Running I/O for 1 seconds... 00:15:42.015 Running I/O for 1 seconds... 00:15:42.275 Running I/O for 1 seconds... 
00:15:42.845 00:15:42.845 Latency(us) 00:15:42.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.845 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:42.845 Nvme1n1 : 1.00 162880.11 636.25 0.00 0.00 782.54 245.76 1112.39 00:15:42.845 =================================================================================================================== 00:15:42.845 Total : 162880.11 636.25 0.00 0.00 782.54 245.76 1112.39 00:15:42.845 00:15:42.845 Latency(us) 00:15:42.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.845 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:42.845 Nvme1n1 : 1.00 14699.24 57.42 0.00 0.00 8681.04 4691.00 16418.49 00:15:42.845 =================================================================================================================== 00:15:42.845 Total : 14699.24 57.42 0.00 0.00 8681.04 4691.00 16418.49 00:15:42.845 00:15:42.845 Latency(us) 00:15:42.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.845 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:42.845 Nvme1n1 : 1.01 12178.42 47.57 0.00 0.00 10469.28 6036.21 15728.64 00:15:42.845 =================================================================================================================== 00:15:42.845 Total : 12178.42 47.57 0.00 0.00 10469.28 6036.21 15728.64 00:15:43.104 00:15:43.104 Latency(us) 00:15:43.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.104 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:43.104 Nvme1n1 : 1.00 12693.80 49.59 0.00 0.00 10050.20 3069.84 18763.99 00:15:43.104 =================================================================================================================== 00:15:43.104 Total : 12693.80 49.59 0.00 0.00 10050.20 3069.84 18763.99 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2650787 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2650789 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2650791 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.040 rmmod nvme_tcp 00:15:44.040 rmmod nvme_fabrics 00:15:44.040 rmmod nvme_keyring 00:15:44.040 10:34:59 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2650616 ']' 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2650616 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 2650616 ']' 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 2650616 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2650616 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2650616' 00:15:44.040 killing process with pid 2650616 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 2650616 00:15:44.040 [2024-05-15 10:34:59.660132] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:44.040 10:34:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 2650616 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.301 10:35:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.837 10:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.837 00:15:46.837 real 0m11.776s 00:15:46.837 user 0m24.194s 00:15:46.837 sys 0m5.995s 00:15:46.837 10:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:46.837 10:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:46.837 ************************************ 00:15:46.837 END TEST nvmf_bdev_io_wait 00:15:46.837 ************************************ 00:15:46.837 10:35:02 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:46.837 10:35:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:46.837 10:35:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:46.837 10:35:02 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.837 ************************************ 00:15:46.837 START TEST nvmf_queue_depth 00:15:46.837 ************************************ 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:46.837 * Looking for test storage... 00:15:46.837 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:46.837 10:35:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.838 10:35:02 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.838 10:35:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:52.118 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:52.118 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:52.118 Found net devices under 0000:27:00.0: cvl_0_0 00:15:52.118 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.119 10:35:07 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:52.119 Found net devices under 0000:27:00.1: cvl_0_1 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:52.119 00:15:52.119 --- 10.0.0.2 ping statistics --- 00:15:52.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.119 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:15:52.119 00:15:52.119 --- 10.0.0.1 ping statistics --- 00:15:52.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.119 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2655284 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2655284 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2655284 ']' 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:52.119 10:35:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.119 [2024-05-15 10:35:07.551588] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:15:52.119 [2024-05-15 10:35:07.551688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.119 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.119 [2024-05-15 10:35:07.670301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.119 [2024-05-15 10:35:07.762619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.119 [2024-05-15 10:35:07.762653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.119 [2024-05-15 10:35:07.762662] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.119 [2024-05-15 10:35:07.762671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.119 [2024-05-15 10:35:07.762678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.119 [2024-05-15 10:35:07.762710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.379 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:52.379 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:15:52.379 10:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.379 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:52.379 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 [2024-05-15 10:35:08.281151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 Malloc0 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.640 10:35:08 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 [2024-05-15 10:35:08.359817] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:52.640 [2024-05-15 10:35:08.360082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2655589 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2655589 /var/tmp/bdevperf.sock 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2655589 ']' 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 10:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:52.640 [2024-05-15 10:35:08.437090] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:15:52.640 [2024-05-15 10:35:08.437204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655589 ] 00:15:52.640 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.901 [2024-05-15 10:35:08.553595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.901 [2024-05-15 10:35:08.645758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:53.468 NVMe0n1 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.468 10:35:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:53.468 Running I/O for 10 seconds... 00:16:05.679 00:16:05.679 Latency(us) 00:16:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.679 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:05.679 Verification LBA range: start 0x0 length 0x4000 00:16:05.679 NVMe0n1 : 10.07 12473.30 48.72 0.00 0.00 81816.22 17936.17 51325.04 00:16:05.679 =================================================================================================================== 00:16:05.679 Total : 12473.30 48.72 0.00 0.00 81816.22 17936.17 51325.04 00:16:05.679 0 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2655589 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2655589 ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2655589 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2655589 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2655589' 00:16:05.679 killing process with pid 2655589 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2655589 00:16:05.679 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.679 00:16:05.679 Latency(us) 00:16:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.679 =================================================================================================================== 00:16:05.679 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2655589 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.679 rmmod nvme_tcp 00:16:05.679 rmmod nvme_fabrics 00:16:05.679 rmmod nvme_keyring 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2655284 ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2655284 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2655284 ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2655284 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2655284 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2655284' 00:16:05.679 killing process with pid 2655284 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2655284 00:16:05.679 [2024-05-15 10:35:19.937551] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:05.679 10:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2655284 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.679 10:35:20 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.097 10:35:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.097 00:16:07.097 real 0m20.298s 00:16:07.097 user 0m25.345s 00:16:07.097 sys 0m4.977s 00:16:07.097 10:35:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:07.097 10:35:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:07.097 ************************************ 00:16:07.097 END TEST nvmf_queue_depth 00:16:07.097 ************************************ 00:16:07.097 10:35:22 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:07.097 10:35:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:07.097 10:35:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:07.097 10:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.097 ************************************ 00:16:07.097 START TEST nvmf_target_multipath 00:16:07.097 ************************************ 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:07.097 * Looking for test storage... 00:16:07.097 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.097 10:35:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.098 10:35:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:12.402 10:35:27 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:12.402 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:12.403 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:12.403 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.403 10:35:27 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:12.403 Found net devices under 0000:27:00.0: cvl_0_0 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:12.403 Found net devices under 0000:27:00.1: cvl_0_1 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.403 10:35:27 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.403 10:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:12.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:16:12.403 00:16:12.403 --- 10.0.0.2 ping statistics --- 00:16:12.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.403 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:16:12.403 00:16:12.403 --- 10.0.0.1 ping statistics --- 00:16:12.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.403 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:12.403 only one NIC for nvmf test 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.403 rmmod nvme_tcp 00:16:12.403 rmmod nvme_fabrics 00:16:12.403 rmmod nvme_keyring 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.403 10:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.937 00:16:14.937 real 0m7.742s 00:16:14.937 user 0m1.493s 00:16:14.937 sys 0m4.113s 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:14.937 10:35:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:14.937 ************************************ 00:16:14.937 END TEST nvmf_target_multipath 00:16:14.937 ************************************ 00:16:14.937 10:35:30 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.937 10:35:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:14.937 10:35:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:14.937 10:35:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.937 ************************************ 00:16:14.937 START TEST nvmf_zcopy 00:16:14.937 ************************************ 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.937 * Looking for test storage... 
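Note: the multipath test above bails out early because this rig exposes only one usable NIC (common.sh left NVMF_SECOND_TARGET_IP empty), so it prints 'only one NIC for nvmf test', runs nvmftestfini and exits 0 before the harness moves on to nvmf_zcopy. A hedged sketch of that guard, with the tested variable inferred from the trace rather than confirmed:

  # multipath.sh@45-48 as reconstructed from the xtrace; the empty variable is presumed to be NVMF_SECOND_TARGET_IP
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi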
00:16:14.937 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.937 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.938 10:35:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:20.210 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.210 10:35:35 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:20.210 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:20.210 Found net devices under 0000:27:00.0: cvl_0_0 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.210 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:20.211 Found net devices under 0000:27:00.1: cvl_0_1 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.211 10:35:35 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.211 10:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:16:20.468 00:16:20.468 --- 10.0.0.2 ping statistics --- 00:16:20.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.468 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:16:20.468 00:16:20.468 --- 10.0.0.1 ping statistics --- 00:16:20.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.468 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2665703 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2665703 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 2665703 ']' 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:20.468 10:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:20.725 [2024-05-15 10:35:36.349072] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:16:20.725 [2024-05-15 10:35:36.349178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.725 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.725 [2024-05-15 10:35:36.469039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.725 [2024-05-15 10:35:36.566169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.725 [2024-05-15 10:35:36.566206] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:20.725 [2024-05-15 10:35:36.566215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.725 [2024-05-15 10:35:36.566224] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.725 [2024-05-15 10:35:36.566232] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.725 [2024-05-15 10:35:36.566258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 [2024-05-15 10:35:37.069646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 [2024-05-15 10:35:37.085603] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:21.293 [2024-05-15 10:35:37.085862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 malloc0 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.293 { 00:16:21.293 "params": { 00:16:21.293 "name": "Nvme$subsystem", 00:16:21.293 "trtype": "$TEST_TRANSPORT", 00:16:21.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.293 "adrfam": "ipv4", 00:16:21.293 "trsvcid": "$NVMF_PORT", 00:16:21.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.293 "hdgst": ${hdgst:-false}, 00:16:21.293 "ddgst": ${ddgst:-false} 00:16:21.293 }, 00:16:21.293 "method": "bdev_nvme_attach_controller" 00:16:21.293 } 00:16:21.293 EOF 00:16:21.293 )") 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:21.293 10:35:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.293 "params": { 00:16:21.293 "name": "Nvme1", 00:16:21.293 "trtype": "tcp", 00:16:21.293 "traddr": "10.0.0.2", 00:16:21.293 "adrfam": "ipv4", 00:16:21.293 "trsvcid": "4420", 00:16:21.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.293 "hdgst": false, 00:16:21.293 "ddgst": false 00:16:21.293 }, 00:16:21.293 "method": "bdev_nvme_attach_controller" 00:16:21.293 }' 00:16:21.553 [2024-05-15 10:35:37.204846] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:16:21.553 [2024-05-15 10:35:37.204952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665894 ] 00:16:21.553 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.553 [2024-05-15 10:35:37.317986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.553 [2024-05-15 10:35:37.408778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.813 Running I/O for 10 seconds... 
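Note: the JSON blob printed just above is what gen_nvmf_target_json emits for Nvme1, and bdevperf consumes it through a process-substitution descriptor (/dev/fd/62 in this capture). A minimal sketch of that invocation, assuming the helper and $rootdir from this run:

  # hand the generated target config to bdevperf without writing a file
  $rootdir/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192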
00:16:34.033 
00:16:34.033                                                                                              Latency(us)
00:16:34.033 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:34.033 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:34.033 Verification LBA range: start 0x0 length 0x1000
00:16:34.033 Nvme1n1                                :      10.01    8931.41      69.78       0.00       0.00   14293.05    1043.40   21661.37
00:16:34.033 ===================================================================================================================
00:16:34.033 Total                                  :                8931.41      69.78       0.00       0.00   14293.05    1043.40   21661.37
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2668010
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:34.033 {
00:16:34.033 "params": {
00:16:34.033 "name": "Nvme$subsystem",
00:16:34.033 "trtype": "$TEST_TRANSPORT",
00:16:34.033 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:34.033 "adrfam": "ipv4",
00:16:34.033 "trsvcid": "$NVMF_PORT",
00:16:34.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:34.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:34.033 "hdgst": ${hdgst:-false},
00:16:34.033 "ddgst": ${ddgst:-false}
00:16:34.033 },
00:16:34.033 "method": "bdev_nvme_attach_controller"
00:16:34.033 }
00:16:34.033 EOF
00:16:34.033 )")
00:16:34.033 [2024-05-15 10:35:48.059441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:34.033 [2024-05-15 10:35:48.059493] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
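The second bdevperf pass that starts here runs a 5 second 50/50 random read/write workload while the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached. Every attempt is rejected, and that is what produces the long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages below; each rejected attempt still goes through the subsystem pause/resume path (the nvmf_rpc_ns_paused frames) while zero-copy I/O is in flight, which is the behavior being exercised. A minimal sketch of that concurrent loop, assuming the same hypothetical /tmp/bdevperf.json as above and not quoted from target/zcopy.sh:

# Sketch: drive the add_ns retry loop concurrently with a short randrw run.
./build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2>/dev/null; do
    # Expected to fail every time (NSID 1 is taken), but each attempt pauses
    # and resumes the subsystem underneath the active zero-copy I/O.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"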
00:16:34.033 [2024-05-15 10:35:48.067345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.067366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:34.033 10:35:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.033 "params": { 00:16:34.033 "name": "Nvme1", 00:16:34.033 "trtype": "tcp", 00:16:34.033 "traddr": "10.0.0.2", 00:16:34.033 "adrfam": "ipv4", 00:16:34.033 "trsvcid": "4420", 00:16:34.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.033 "hdgst": false, 00:16:34.033 "ddgst": false 00:16:34.033 }, 00:16:34.033 "method": "bdev_nvme_attach_controller" 00:16:34.033 }' 00:16:34.033 [2024-05-15 10:35:48.075352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.075371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.083343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.083361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.091331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.091347] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.099341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.099357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.107342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.107358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.115330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.115346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.123341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.123360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.127054] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:16:34.033 [2024-05-15 10:35:48.127167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668010 ] 00:16:34.033 [2024-05-15 10:35:48.131337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.131352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.139348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.139363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.147343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.147358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.155337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.155352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.163351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.163366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.171353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.171370] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.179346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.179361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.187357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.187371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.195351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.195364] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.033 [2024-05-15 10:35:48.203369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.203384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.211364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.211378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.219354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.219368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.227363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.227375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.235422] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.235435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.242663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.033 [2024-05-15 10:35:48.243364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.243378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.251373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.251386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.259366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.259379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.267385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.267401] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.275384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.275400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.283370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.283386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.291386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.291403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.299396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.299411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.307379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.307392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.315392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.315406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.323400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.323419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.331404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.331419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.339399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.339413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.340302] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:16:34.033 [2024-05-15 10:35:48.347406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.347420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.355406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.355421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.363410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.363423] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.371399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.371413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.379414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.379428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.387404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.387418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.395423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.395436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.403415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.403431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.411412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.411431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.419424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.419438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.427420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.427433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.435420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.435433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.443425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.443438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.451429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.451443] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.459434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:34.033 [2024-05-15 10:35:48.459446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.467433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.467446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.475428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.475442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.483441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.483454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.491447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.491461] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.499436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.499449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.507470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.507493] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.515465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.515488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.523479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.523498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.531480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.531500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.539460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.539475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.547466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.547480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.555476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.555490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.563468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.563482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.571480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.571496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.579487] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.579511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.587506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.587528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.595493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.595513] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.603475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.603490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.611519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.611546] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 Running I/O for 5 seconds... 00:16:34.033 [2024-05-15 10:35:48.619503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.619520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.632083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.632111] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.642732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.642759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.651238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.651265] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.660527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.660554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.669559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.669585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.678698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.678724] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.687302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.687329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.696258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.696284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.705278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 
[2024-05-15 10:35:48.705306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.714217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.033 [2024-05-15 10:35:48.714243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.033 [2024-05-15 10:35:48.723342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.723369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.732461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.732488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.741565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.741593] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.751014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.751042] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.760121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.760146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.769606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.769634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.778717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.778744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.787866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.787892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.797529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.797556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.807119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.807145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.816251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.816280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.825543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.825569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.834775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.834805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.843727] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.843754] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.852833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.852858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.862064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.862090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.871309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.871335] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.880620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.880646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.889651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.889680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.898598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.898623] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.907933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.907960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.917105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.917131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.925981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.926007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.936111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.936136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.946021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.946052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.955766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.955792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.965088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.965114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.974845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.974871] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.983379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.983405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:48.992657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:48.992684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.002343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.002369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.011544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.011569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.021196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.021221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.030341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.030366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.039476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.039502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.048795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.048822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.057761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.057786] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.067302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.067333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.077010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.077036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.086322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.086349] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.095642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.095670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.105171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.105198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.114910] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.114937] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.124643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.124669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.133825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.133851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.142939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.142964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.151795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.151823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.161631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.161659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.170453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.170476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.179459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.179487] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.189097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.189122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.198639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.198665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.207714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.207741] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.216624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.216651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.226148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.226174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.234563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.234590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.244305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.244334] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.254156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.254184] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.264383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.264416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.272411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.272441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.284114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.284142] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.293626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.293653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.302565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.302593] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.312249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.312275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.320673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.320698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.330202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.330228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.338737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.338767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.348505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.348534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.357625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.357652] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.366611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.366639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.375685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.375715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.384718] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.384744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.394862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.394891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.403971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.403998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.413592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.413621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.423339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.423371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.433193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.433221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.442344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.442371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.451935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.451962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.461104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.461130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.470310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.470338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.479419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.479446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.488460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.488487] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.498002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.498030] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.507351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.507379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.517026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.517059] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.526783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.526811] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.535883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.535909] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.544495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.544521] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.552919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.552944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.562530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.562557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.572208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.572236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.581608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.581634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.590365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.590394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.599870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.599904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.608457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.608486] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.617856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.617884] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.627241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.627268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.636481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.636508] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.645547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.645576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.654657] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.654684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.663702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.663729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.672721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.672746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.681805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.681834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.034 [2024-05-15 10:35:49.691637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.034 [2024-05-15 10:35:49.691664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.700869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.700898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.710666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.710692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.720338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.720366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.729678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.729704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.739257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.739285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.748623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.748653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.757822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.757850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.767148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.767173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.776785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.776811] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.786508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.786534] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.794995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.795021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.804399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.804424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.813928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.813953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.823463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.823488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.833124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.833148] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.842631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.842660] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.851806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.851833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.861410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.861435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.870009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.870034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.879473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.879498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.889164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.889188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.035 [2024-05-15 10:35:49.898246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.035 [2024-05-15 10:35:49.898273] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.907689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.907716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.916933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.916959] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.926083] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.926109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.935048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.935075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.945139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.945164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.954983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.955013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.964091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.964117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.973635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.973664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.294 [2024-05-15 10:35:49.982173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.294 [2024-05-15 10:35:49.982201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:49.991133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:49.991158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.000274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.000299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.008799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.008838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.018738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.018768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.027285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.027315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.036618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.036648] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.045777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.045804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.055029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.055060] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.064683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.064708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.073407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.073437] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.083151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.083182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.092907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.092934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.101579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.101608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.110767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.110795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.120001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.120027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.129427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.129454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.138742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.138769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.148726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.148752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.157884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.157912] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.295 [2024-05-15 10:35:50.167538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.295 [2024-05-15 10:35:50.167563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.176144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.176170] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.185663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.185688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.194859] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.194886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.204071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.204097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.213427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.213453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.223153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.223177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.232412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.232438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.242067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.242092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.251208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.251235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.260411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.260437] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.269768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.269795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.279102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.279129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.288424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.288450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.298196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.298227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.307473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.307501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.317095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.317121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.325629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.325655] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.334893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.334918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.344494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.344520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.353733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.353761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.362228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.362254] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.371892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.371919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.380538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.380564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.390103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.390128] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.399870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.399896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.408913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.408939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.553 [2024-05-15 10:35:50.418093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.553 [2024-05-15 10:35:50.418117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.427381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.427406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.436422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.436449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.445566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.445592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.454808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.454835] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.464515] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.464539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.474037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.474077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.483807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.483832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.492346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.492373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.501964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.501989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.511549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.511573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.521212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.521239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.530965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.530991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.540168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.540195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.549674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.549699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.559232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.559260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.568362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.568389] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.577731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.577757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.586827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.586855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.595771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.595798] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.813 [2024-05-15 10:35:50.604938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.813 [2024-05-15 10:35:50.604965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.614690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.614719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.624131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.624158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.634021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.634053] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.643259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.643284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.652514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.652544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.661524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.661550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.671299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.671325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.814 [2024-05-15 10:35:50.680874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.814 [2024-05-15 10:35:50.680901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.690399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.690424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.699543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.699568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.709001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.709027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.718526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.718552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.727551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.727576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.737011] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.737038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.746040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.746071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.755341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.755368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.764970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.764997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.774107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.774136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.783094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.783122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.791914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.791939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.801706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.801733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.811269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.811293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.819776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.819802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.829842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.829872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.838704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.838732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.847137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.847167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.074 [2024-05-15 10:35:50.857896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.074 [2024-05-15 10:35:50.857923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.867340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.867366] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.876451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.876477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.885823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.885848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.895564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.895592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.905193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.905219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.914522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.914551] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.923797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.923823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.932858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.932883] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.075 [2024-05-15 10:35:50.942668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.075 [2024-05-15 10:35:50.942694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.952465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.952494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.962142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.962167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.971318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.971344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.980615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.980641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.989190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.989215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:50.998659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:50.998688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.007184] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.007216] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.016790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.016815] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.026017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.026048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.035550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.035576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.044160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.044186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.053918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.053946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.063288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.063314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.071900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.071928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.081123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.081151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.090251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.090279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.099624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.335 [2024-05-15 10:35:51.099651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.335 [2024-05-15 10:35:51.109472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.109503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.118691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.118719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.127770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.127796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.136894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.136922] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.145766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.145792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.155368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.155396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.164557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.164585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.173777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.173805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.183646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.183673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.192971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.193001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.336 [2024-05-15 10:35:51.202082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.336 [2024-05-15 10:35:51.202109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.211405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.211433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.220297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.220323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.229938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.229964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.239298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.239327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.248601] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.248627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.257547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.257574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.266791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.266818] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.276352] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.276380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.285555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.285582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.294756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.294783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.304394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.304419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.313042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.313078] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.322182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.322210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.331921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.331948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.341219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.341247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.350709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.350734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.360572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.360601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.369773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.369801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.379006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.379034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.388292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.388319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.397798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.397824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.406807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.406832] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.416651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.416678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.425319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.425344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.434325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.434352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.443933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.443958] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.453752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.453776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.597 [2024-05-15 10:35:51.462314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.597 [2024-05-15 10:35:51.462340] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.472194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.472221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.481692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.481719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.491053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.491079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.500352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.500377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.509555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.509580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.518744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.518769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.528208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.528231] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.537213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.537239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.546336] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.546361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.555322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.555348] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.564707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.564733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.573910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.573936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.583187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.583212] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.592718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.592743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.601988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.602014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.611809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.611837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.621096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.621122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.630614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.630640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.639628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.639653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.648864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.648893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.657740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.657766] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.667600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.667629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.676788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.676817] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.686298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.686323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.696028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.696061] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.705492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.705522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.714702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.857 [2024-05-15 10:35:51.714728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.857 [2024-05-15 10:35:51.724208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.858 [2024-05-15 10:35:51.724233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.733437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.733466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.742651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.742677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.751826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.751852] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.761658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.761683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.770720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.770745] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.780398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.780426] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.790153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.790179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.798632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.798659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.808258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.808284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.817473] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.817500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.826990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.827015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.836031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.836061] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.845146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.845172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.854669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.854692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.863992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.864022] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.873671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.873698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.882198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.882228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.891823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.891850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.901012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.901038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.909996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.910020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.919470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.919497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.928492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.928518] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.937549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.937574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.947157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.947183] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.956694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.956720] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.965621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.965648] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.974536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.974562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.116 [2024-05-15 10:35:51.983507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.116 [2024-05-15 10:35:51.983533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:51.993048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:51.993075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.002517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.002543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.011706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.011732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.020871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.020896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.030019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.030050] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.039222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.039249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.048264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.048290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.057693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.057722] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.067396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.067423] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.076428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.441 [2024-05-15 10:35:52.076453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.441 [2024-05-15 10:35:52.086092] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:36.441 [2024-05-15 10:35:52.086120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2029 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1536 "Unable to add namespace") is logged for every further add-namespace RPC that requests NSID 1 while it is still attached, roughly every 10 ms from 2024-05-15 10:35:52.095924 through 10:35:53.576989 (elapsed 00:16:36.441 - 00:16:37.756); the identical repeats are condensed here ...]
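The pair above is what the target prints each time an add-namespace RPC asks for NSID 1 while that namespace is still attached to nqn.2016-06.io.spdk:cnode1. As a rough illustration only (the exact loop inside zcopy.sh is not visible in this part of the log, and calling scripts/rpc.py directly instead of the test's rpc_cmd helper is an assumption), a driver loop of this shape would produce the same repeated pair:

  # Illustrative sketch, not the literal zcopy.sh loop: issue add-namespace RPCs that are
  # expected to fail while NSID 1 is still attached, so the target logs one error pair per attempt.
  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  for _ in $(seq 1 100); do
      "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true   # '|| true' keeps looping across the expected failures
  done

The repeats continue below until the background run finishes and the namespace is removed at zcopy.sh line 52.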
[... the pair keeps repeating at the same cadence through 2024-05-15 10:35:53.622766 (elapsed 00:16:37.756), after which the I/O run's per-device summary is printed ...]
00:16:37.756
00:16:37.756                                                                          Latency(us)
00:16:37.756 Device Information                                            : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:16:37.756 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:37.756 Nvme1n1                                                       :       5.01   17231.00     134.62      0.00      0.00    7420.29    3242.31   18212.11
00:16:37.756 ===================================================================================================================
00:16:37.756 Total                                                         :              17231.00     134.62      0.00      0.00    7420.29    3242.31   18212.11
[... the same error pair then resumes for the add-namespace RPCs still being issued, from 2024-05-15 10:35:53.629241 through 10:35:53.917306 (elapsed 00:16:37.756 - 00:16:38.273); those identical repeats are condensed, and the final few occurrences plus the end of the zcopy run follow verbatim ...]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.273 [2024-05-15 10:35:53.917320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.925298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.925312] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.933311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.933327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.941312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.941326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.949306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.949322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.957312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.957328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.965308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.965322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.973316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.973330] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.981325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.981339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.989314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.989328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 [2024-05-15 10:35:53.997325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.274 [2024-05-15 10:35:53.997339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.274 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2668010) - No such process 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2668010 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.274 delay0 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.274 10:35:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:38.274 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.533 [2024-05-15 10:35:54.194434] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:45.105 [2024-05-15 10:36:00.294382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:16:45.105 Initializing NVMe Controllers 00:16:45.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:45.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:45.105 Initialization complete. Launching workers. 00:16:45.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 261 00:16:45.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 543, failed to submit 38 00:16:45.105 success 350, unsuccess 193, failed 0 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.105 rmmod nvme_tcp 00:16:45.105 rmmod nvme_fabrics 00:16:45.105 rmmod nvme_keyring 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2665703 ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2665703 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 2665703 ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 2665703 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2665703 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2665703' 00:16:45.105 killing process with pid 2665703 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 2665703 00:16:45.105 [2024-05-15 10:36:00.412797] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 2665703 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.105 10:36:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.644 10:36:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.644 00:16:47.644 real 0m32.559s 00:16:47.644 user 0m46.686s 00:16:47.644 sys 0m7.586s 00:16:47.644 10:36:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:47.644 10:36:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:47.644 ************************************ 00:16:47.644 END TEST nvmf_zcopy 00:16:47.644 ************************************ 00:16:47.644 10:36:02 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:47.644 10:36:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:47.644 10:36:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:47.644 10:36:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.644 ************************************ 00:16:47.644 START TEST nvmf_nmic 00:16:47.644 ************************************ 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:47.644 * Looking for test storage... 
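For reference before the nmic test proceeds, the tail of the zcopy run above reduces to a short RPC sequence plus one abort invocation. A minimal standalone sketch, using scripts/rpc.py in place of the test's rpc_cmd wrapper (that substitution and the standalone form are assumptions; the arguments are the ones shown in the log):

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_subsystem_remove_ns "$NQN" 1              # detach the original NSID 1
  "$RPC" bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # wrap malloc0 in a high-latency delay bdev
  "$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1       # re-expose it as NSID 1

  # Run the abort example against the now-slow namespace over TCP, with the arguments logged above.
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps I/O outstanding long enough for abort commands to find something to cancel, which is presumably why the test swaps it in before the abort run whose statistics appear above.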
00:16:47.644 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.644 10:36:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.645 10:36:03 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.645 10:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
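The nvmf/common.sh lines above set up the per-test connection variables (NVMF_PORT, NVME_SUBNQN, NVME_HOSTNQN and NVME_HOSTID collected into the NVME_HOST array, NVME_CONNECT). A sketch of how a test typically consumes them once nvmf_tcp_init has assigned the target address follows; the composition is illustrative, since nmic.sh's exact connect line is not shown in this part of the log:

  NVMF_PORT=4420
  NVMF_FIRST_TARGET_IP=10.0.0.2                          # assigned by nvmf_tcp_init further down
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                    # host ID reuses the uuid portion of the hostnqn
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'

  # $NVME_CONNECT expands to 'nvme connect'; the array splices in the host identity flags.
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n "$NVME_SUBNQN" \
      -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"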
00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:52.916 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.916 10:36:08 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:52.916 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:52.916 Found net devices under 0000:27:00.0: cvl_0_0 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:52.916 Found net devices under 0000:27:00.1: cvl_0_1 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:16:52.916 00:16:52.916 --- 10.0.0.2 ping statistics --- 00:16:52.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.916 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:16:52.916 00:16:52.916 --- 10.0.0.1 ping statistics --- 00:16:52.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.916 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2674876 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2674876 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2674876 ']' 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.916 10:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.176 [2024-05-15 10:36:08.805585] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:16:53.176 [2024-05-15 10:36:08.805710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.176 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.176 [2024-05-15 10:36:08.942908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.176 [2024-05-15 10:36:09.047573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.176 [2024-05-15 10:36:09.047623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.176 [2024-05-15 10:36:09.047634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.176 [2024-05-15 10:36:09.047645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.176 [2024-05-15 10:36:09.047654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.437 [2024-05-15 10:36:09.051077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.437 [2024-05-15 10:36:09.051105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.437 [2024-05-15 10:36:09.051154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.437 [2024-05-15 10:36:09.051155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.696 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.696 [2024-05-15 10:36:09.564069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 Malloc0 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 [2024-05-15 10:36:09.632862] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:53.955 [2024-05-15 10:36:09.633234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:53.955 test case1: single bdev can't be used in multiple subsystems 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.955 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.956 [2024-05-15 10:36:09.656967] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:53.956 [2024-05-15 10:36:09.656999] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:53.956 [2024-05-15 10:36:09.657011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:53.956 request: 00:16:53.956 { 00:16:53.956 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:53.956 "namespace": { 00:16:53.956 "bdev_name": "Malloc0", 00:16:53.956 "no_auto_visible": false 00:16:53.956 }, 00:16:53.956 "method": "nvmf_subsystem_add_ns", 00:16:53.956 "req_id": 1 00:16:53.956 } 00:16:53.956 Got JSON-RPC error response 00:16:53.956 response: 00:16:53.956 { 00:16:53.956 "code": -32602, 00:16:53.956 "message": "Invalid parameters" 00:16:53.956 } 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:53.956 Adding namespace failed - expected result. 
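test case1 above exercises the exclusive-write claim on a bdev: the same Malloc0 that already backs a namespace of cnode1 is offered to a second subsystem, and the nvmf_subsystem_add_ns RPC is expected to fail. Replayed by hand against a running nvmf_tgt it looks roughly like the sketch below; all RPC names and arguments are taken from the trace above, and `rpc.py` stands in for the full /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py path:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed (exclusive_write)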
00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:53.956 test case2: host connect to nvmf target in multiple paths 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.956 [2024-05-15 10:36:09.665114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.956 10:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.339 10:36:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:56.719 10:36:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:56.719 10:36:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:16:56.719 10:36:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.719 10:36:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:56.719 10:36:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:16:59.254 10:36:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:59.254 [global] 00:16:59.254 thread=1 00:16:59.254 invalidate=1 00:16:59.254 rw=write 00:16:59.254 time_based=1 00:16:59.254 runtime=1 00:16:59.254 ioengine=libaio 00:16:59.254 direct=1 00:16:59.254 bs=4096 00:16:59.254 iodepth=1 00:16:59.254 norandommap=0 00:16:59.254 numjobs=1 00:16:59.254 00:16:59.254 verify_dump=1 00:16:59.254 verify_backlog=512 00:16:59.254 verify_state_save=0 00:16:59.254 do_verify=1 00:16:59.254 verify=crc32c-intel 00:16:59.254 [job0] 00:16:59.254 filename=/dev/nvme0n1 00:16:59.254 Could not set queue depth (nvme0n1) 00:16:59.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.254 fio-3.35 00:16:59.254 Starting 1 thread 00:17:00.658 00:17:00.658 job0: (groupid=0, jobs=1): err= 0: pid=2676248: Wed May 15 10:36:16 2024 00:17:00.658 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:17:00.658 slat (nsec): min=4289, max=27351, avg=6003.99, stdev=870.93 00:17:00.658 
clat (usec): min=179, max=523, avg=257.14, stdev=35.77 00:17:00.658 lat (usec): min=184, max=529, avg=263.14, stdev=36.06 00:17:00.658 clat percentiles (usec): 00:17:00.658 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:17:00.658 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:17:00.658 | 70.00th=[ 281], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 285], 00:17:00.658 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 347], 99.95th=[ 355], 00:17:00.658 | 99.99th=[ 523] 00:17:00.658 write: IOPS=2091, BW=8364KiB/s (8565kB/s)(8364KiB/1000msec); 0 zone resets 00:17:00.658 slat (usec): min=5, max=25304, avg=19.76, stdev=553.22 00:17:00.658 clat (usec): min=117, max=502, avg=196.41, stdev=15.27 00:17:00.658 lat (usec): min=123, max=25807, avg=216.17, stdev=560.10 00:17:00.658 clat percentiles (usec): 00:17:00.658 | 1.00th=[ 131], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:17:00.658 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 196], 60.00th=[ 198], 00:17:00.658 | 70.00th=[ 200], 80.00th=[ 200], 90.00th=[ 204], 95.00th=[ 208], 00:17:00.658 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 302], 99.95th=[ 474], 00:17:00.658 | 99.99th=[ 502] 00:17:00.658 bw ( KiB/s): min= 8192, max= 8192, per=97.94%, avg=8192.00, stdev= 0.00, samples=1 00:17:00.658 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:00.658 lat (usec) : 250=64.41%, 500=35.54%, 750=0.05% 00:17:00.658 cpu : usr=2.10%, sys=3.90%, ctx=4142, majf=0, minf=1 00:17:00.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.658 issued rwts: total=2048,2091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.658 00:17:00.658 Run status group 0 (all jobs): 00:17:00.658 READ: bw=8192KiB/s (8389kB/s), 8192KiB/s-8192KiB/s (8389kB/s-8389kB/s), io=8192KiB (8389kB), run=1000-1000msec 00:17:00.658 WRITE: bw=8364KiB/s (8565kB/s), 8364KiB/s-8364KiB/s (8565kB/s-8565kB/s), io=8364KiB (8565kB), run=1000-1000msec 00:17:00.658 00:17:00.658 Disk stats (read/write): 00:17:00.658 nvme0n1: ios=1733/2048, merge=0/0, ticks=1388/393, in_queue=1781, util=98.80% 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.658 rmmod nvme_tcp 00:17:00.658 rmmod nvme_fabrics 00:17:00.658 rmmod nvme_keyring 00:17:00.658 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2674876 ']' 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2674876 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2674876 ']' 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2674876 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2674876 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2674876' 00:17:00.918 killing process with pid 2674876 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2674876 00:17:00.918 [2024-05-15 10:36:16.581354] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.918 10:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2674876 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.482 10:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.387 10:36:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.387 00:17:03.387 real 0m16.161s 00:17:03.387 user 0m46.231s 00:17:03.387 sys 0m4.962s 00:17:03.387 10:36:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:03.387 10:36:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:03.387 ************************************ 00:17:03.387 END TEST nvmf_nmic 00:17:03.387 ************************************ 00:17:03.387 10:36:19 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:03.387 10:36:19 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:03.387 10:36:19 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:03.387 10:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.387 ************************************ 00:17:03.387 START TEST nvmf_fio_target 00:17:03.387 ************************************ 00:17:03.387 10:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:03.647 * Looking for test storage... 00:17:03.647 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.647 10:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.214 10:36:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.214 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:10.215 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:10.215 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:10.215 
Found net devices under 0000:27:00.0: cvl_0_0 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:10.215 Found net devices under 0000:27:00.1: cvl_0_1 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.215 10:36:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:17:10.215 00:17:10.215 --- 10.0.0.2 ping statistics --- 00:17:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.215 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:17:10.215 00:17:10.215 --- 10.0.0.1 ping statistics --- 00:17:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.215 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2680642 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2680642 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2680642 ']' 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
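Before the fio_target test starts its own nvmf_tgt, nvmftestinit has rebuilt the same namespace-based TCP topology used in the nmic run above: the first ICE port (cvl_0_0) is moved into a private namespace and addressed as the target, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace above (commands verbatim, only the ordering is compacted), the plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, listens on 10.0.0.2:4420
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check

nvmf_tgt is then launched inside that namespace (see the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF` line that follows), which is why the listener addresses in the subsequent RPC calls all use 10.0.0.2.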
00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.215 10:36:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:10.215 [2024-05-15 10:36:25.257089] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:17:10.215 [2024-05-15 10:36:25.257223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.215 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.215 [2024-05-15 10:36:25.401404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.215 [2024-05-15 10:36:25.512821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.215 [2024-05-15 10:36:25.512862] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.215 [2024-05-15 10:36:25.512871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.215 [2024-05-15 10:36:25.512880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.215 [2024-05-15 10:36:25.512888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.215 [2024-05-15 10:36:25.513041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.215 [2024-05-15 10:36:25.513143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.215 [2024-05-15 10:36:25.513173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.215 [2024-05-15 10:36:25.513181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.215 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.475 [2024-05-15 10:36:26.156149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.475 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.735 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:10.736 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.736 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:10.736 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.996 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:10.996 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.255 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:11.255 10:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:11.255 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.514 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:11.514 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.773 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:11.773 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:12.031 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:12.031 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:12.031 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:12.288 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:12.288 10:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.288 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:12.288 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:12.546 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.546 [2024-05-15 10:36:28.402621] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:12.546 [2024-05-15 10:36:28.402957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.806 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:12.806 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:13.065 10:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 
--hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:17:14.440 10:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:17:16.350 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:17:16.350 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:16.350 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.608 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:17:16.608 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.608 10:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:17:16.608 10:36:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:16.608 [global] 00:17:16.608 thread=1 00:17:16.608 invalidate=1 00:17:16.608 rw=write 00:17:16.608 time_based=1 00:17:16.608 runtime=1 00:17:16.608 ioengine=libaio 00:17:16.608 direct=1 00:17:16.608 bs=4096 00:17:16.608 iodepth=1 00:17:16.608 norandommap=0 00:17:16.608 numjobs=1 00:17:16.608 00:17:16.608 verify_dump=1 00:17:16.608 verify_backlog=512 00:17:16.608 verify_state_save=0 00:17:16.608 do_verify=1 00:17:16.608 verify=crc32c-intel 00:17:16.608 [job0] 00:17:16.608 filename=/dev/nvme0n1 00:17:16.608 [job1] 00:17:16.608 filename=/dev/nvme0n2 00:17:16.608 [job2] 00:17:16.608 filename=/dev/nvme0n3 00:17:16.608 [job3] 00:17:16.608 filename=/dev/nvme0n4 00:17:16.608 Could not set queue depth (nvme0n1) 00:17:16.608 Could not set queue depth (nvme0n2) 00:17:16.608 Could not set queue depth (nvme0n3) 00:17:16.608 Could not set queue depth (nvme0n4) 00:17:16.866 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.866 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.866 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.866 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.866 fio-3.35 00:17:16.866 Starting 4 threads 00:17:18.241 00:17:18.241 job0: (groupid=0, jobs=1): err= 0: pid=2682164: Wed May 15 10:36:33 2024 00:17:18.241 read: IOPS=1028, BW=4115KiB/s (4214kB/s)(4144KiB/1007msec) 00:17:18.241 slat (nsec): min=2919, max=33870, avg=5682.71, stdev=2996.02 00:17:18.241 clat (usec): min=187, max=41057, avg=694.82, stdev=4356.51 00:17:18.241 lat (usec): min=192, max=41090, avg=700.50, stdev=4359.24 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206], 00:17:18.241 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 227], 00:17:18.241 | 
70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 269], 00:17:18.241 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:18.241 | 99.99th=[41157] 00:17:18.241 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:17:18.241 slat (nsec): min=3587, max=63741, avg=5589.16, stdev=2268.45 00:17:18.241 clat (usec): min=104, max=1694, avg=174.69, stdev=82.96 00:17:18.241 lat (usec): min=110, max=1710, avg=180.28, stdev=83.09 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 122], 00:17:18.241 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 137], 60.00th=[ 153], 00:17:18.241 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:17:18.241 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 1188], 99.95th=[ 1696], 00:17:18.241 | 99.99th=[ 1696] 00:17:18.241 bw ( KiB/s): min= 896, max=11392, per=51.75%, avg=6144.00, stdev=7421.79, samples=2 00:17:18.241 iops : min= 224, max= 2848, avg=1536.00, stdev=1855.45, samples=2 00:17:18.241 lat (usec) : 250=83.44%, 500=15.98% 00:17:18.241 lat (msec) : 2=0.12%, 50=0.47% 00:17:18.241 cpu : usr=0.99%, sys=1.89%, ctx=2573, majf=0, minf=1 00:17:18.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.241 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.241 job1: (groupid=0, jobs=1): err= 0: pid=2682174: Wed May 15 10:36:33 2024 00:17:18.241 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:17:18.241 slat (nsec): min=10331, max=34186, avg=32125.32, stdev=4891.95 00:17:18.241 clat (usec): min=40747, max=41454, avg=40979.04, stdev=123.78 00:17:18.241 lat (usec): min=40780, max=41464, avg=41011.16, stdev=119.67 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:18.241 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:18.241 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:18.241 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:18.241 | 99.99th=[41681] 00:17:18.241 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:18.241 slat (nsec): min=5483, max=48996, avg=8825.62, stdev=2280.81 00:17:18.241 clat (usec): min=160, max=526, avg=247.26, stdev=23.98 00:17:18.241 lat (usec): min=169, max=575, avg=256.08, stdev=24.89 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[ 182], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 235], 00:17:18.241 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:17:18.241 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:17:18.241 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 529], 99.95th=[ 529], 00:17:18.241 | 99.99th=[ 529] 00:17:18.241 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.241 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.241 lat (usec) : 250=54.31%, 500=41.39%, 750=0.19% 00:17:18.241 lat (msec) : 50=4.12% 00:17:18.241 cpu : usr=0.48%, sys=0.39%, ctx=535, majf=0, minf=1 00:17:18.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:17:18.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.241 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.241 job2: (groupid=0, jobs=1): err= 0: pid=2682192: Wed May 15 10:36:33 2024 00:17:18.241 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:17:18.241 slat (nsec): min=9204, max=33655, avg=30837.23, stdev=5424.30 00:17:18.241 clat (usec): min=40857, max=41204, avg=40971.19, stdev=78.91 00:17:18.241 lat (usec): min=40889, max=41213, avg=41002.03, stdev=75.70 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:18.241 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:18.241 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:18.241 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:18.241 | 99.99th=[41157] 00:17:18.241 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:18.241 slat (usec): min=5, max=108, avg= 8.99, stdev= 4.56 00:17:18.241 clat (usec): min=144, max=533, avg=241.86, stdev=18.90 00:17:18.241 lat (usec): min=150, max=642, avg=250.84, stdev=22.28 00:17:18.241 clat percentiles (usec): 00:17:18.241 | 1.00th=[ 172], 5.00th=[ 223], 10.00th=[ 237], 20.00th=[ 241], 00:17:18.241 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 245], 00:17:18.241 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 249], 95.00th=[ 251], 00:17:18.241 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 537], 99.95th=[ 537], 00:17:18.241 | 99.99th=[ 537] 00:17:18.241 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.241 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.241 lat (usec) : 250=90.64%, 500=5.06%, 750=0.19% 00:17:18.241 lat (msec) : 50=4.12% 00:17:18.241 cpu : usr=0.48%, sys=0.39%, ctx=535, majf=0, minf=1 00:17:18.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.241 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.241 job3: (groupid=0, jobs=1): err= 0: pid=2682197: Wed May 15 10:36:33 2024 00:17:18.241 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:17:18.241 slat (nsec): min=9891, max=33931, avg=31687.73, stdev=4890.35 00:17:18.242 clat (usec): min=40767, max=41236, avg=40968.92, stdev=83.77 00:17:18.242 lat (usec): min=40800, max=41246, avg=41000.61, stdev=80.35 00:17:18.242 clat percentiles (usec): 00:17:18.242 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:18.242 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:18.242 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:18.242 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:18.242 | 99.99th=[41157] 00:17:18.242 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:18.242 slat (nsec): min=4713, max=54426, avg=8728.08, stdev=2653.38 00:17:18.242 clat (usec): min=171, max=539, avg=247.62, stdev=24.03 00:17:18.242 lat (usec): min=180, max=593, avg=256.35, stdev=25.11 00:17:18.242 clat percentiles (usec): 
00:17:18.242 | 1.00th=[ 186], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:17:18.242 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:17:18.242 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:17:18.242 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 537], 99.95th=[ 537], 00:17:18.242 | 99.99th=[ 537] 00:17:18.242 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.242 lat (usec) : 250=55.24%, 500=40.45%, 750=0.19% 00:17:18.242 lat (msec) : 50=4.12% 00:17:18.242 cpu : usr=0.00%, sys=0.87%, ctx=535, majf=0, minf=1 00:17:18.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.242 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.242 00:17:18.242 Run status group 0 (all jobs): 00:17:18.242 READ: bw=4259KiB/s (4361kB/s), 85.0KiB/s-4115KiB/s (87.1kB/s-4214kB/s), io=4408KiB (4514kB), run=1007-1035msec 00:17:18.242 WRITE: bw=11.6MiB/s (12.2MB/s), 1979KiB/s-6101KiB/s (2026kB/s-6248kB/s), io=12.0MiB (12.6MB), run=1007-1035msec 00:17:18.242 00:17:18.242 Disk stats (read/write): 00:17:18.242 nvme0n1: ios=1086/1536, merge=0/0, ticks=707/259, in_queue=966, util=87.27% 00:17:18.242 nvme0n2: ios=40/512, merge=0/0, ticks=1600/123, in_queue=1723, util=90.05% 00:17:18.242 nvme0n3: ios=74/512, merge=0/0, ticks=775/123, in_queue=898, util=95.01% 00:17:18.242 nvme0n4: ios=42/512, merge=0/0, ticks=1603/128, in_queue=1731, util=94.44% 00:17:18.242 10:36:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:18.242 [global] 00:17:18.242 thread=1 00:17:18.242 invalidate=1 00:17:18.242 rw=randwrite 00:17:18.242 time_based=1 00:17:18.242 runtime=1 00:17:18.242 ioengine=libaio 00:17:18.242 direct=1 00:17:18.242 bs=4096 00:17:18.242 iodepth=1 00:17:18.242 norandommap=0 00:17:18.242 numjobs=1 00:17:18.242 00:17:18.242 verify_dump=1 00:17:18.242 verify_backlog=512 00:17:18.242 verify_state_save=0 00:17:18.242 do_verify=1 00:17:18.242 verify=crc32c-intel 00:17:18.242 [job0] 00:17:18.242 filename=/dev/nvme0n1 00:17:18.242 [job1] 00:17:18.242 filename=/dev/nvme0n2 00:17:18.242 [job2] 00:17:18.242 filename=/dev/nvme0n3 00:17:18.242 [job3] 00:17:18.242 filename=/dev/nvme0n4 00:17:18.242 Could not set queue depth (nvme0n1) 00:17:18.242 Could not set queue depth (nvme0n2) 00:17:18.242 Could not set queue depth (nvme0n3) 00:17:18.242 Could not set queue depth (nvme0n4) 00:17:18.501 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.501 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.501 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.501 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.501 fio-3.35 00:17:18.501 Starting 4 threads 00:17:19.874 00:17:19.874 job0: (groupid=0, jobs=1): err= 0: pid=2682754: Wed May 15 10:36:35 2024 00:17:19.874 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 
00:17:19.874 slat (nsec): min=16019, max=33261, avg=31332.00, stdev=3444.74 00:17:19.874 clat (usec): min=40886, max=41468, avg=40979.77, stdev=116.40 00:17:19.874 lat (usec): min=40918, max=41484, avg=41011.11, stdev=113.17 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:19.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:19.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:19.874 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:19.874 | 99.99th=[41681] 00:17:19.874 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:17:19.874 slat (nsec): min=5600, max=48491, avg=8377.06, stdev=2449.71 00:17:19.874 clat (usec): min=117, max=534, avg=228.83, stdev=38.54 00:17:19.874 lat (usec): min=126, max=582, avg=237.21, stdev=39.27 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[ 133], 5.00th=[ 163], 10.00th=[ 186], 20.00th=[ 202], 00:17:19.874 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 241], 00:17:19.874 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:17:19.874 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 537], 99.95th=[ 537], 00:17:19.874 | 99.99th=[ 537] 00:17:19.874 bw ( KiB/s): min= 4096, max= 4096, per=29.49%, avg=4096.00, stdev= 0.00, samples=1 00:17:19.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:19.874 lat (usec) : 250=67.60%, 500=28.09%, 750=0.19% 00:17:19.874 lat (msec) : 50=4.12% 00:17:19.874 cpu : usr=0.29%, sys=0.49%, ctx=535, majf=0, minf=1 00:17:19.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:19.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:19.874 job1: (groupid=0, jobs=1): err= 0: pid=2682762: Wed May 15 10:36:35 2024 00:17:19.874 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:17:19.874 slat (nsec): min=7718, max=31646, avg=29499.36, stdev=4877.83 00:17:19.874 clat (usec): min=40826, max=41312, avg=40978.80, stdev=90.31 00:17:19.874 lat (usec): min=40857, max=41320, avg=41008.30, stdev=86.30 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:19.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:19.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:19.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:19.874 | 99.99th=[41157] 00:17:19.874 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:19.874 slat (nsec): min=6134, max=45304, avg=8348.58, stdev=2302.01 00:17:19.874 clat (usec): min=113, max=518, avg=220.96, stdev=40.77 00:17:19.874 lat (usec): min=121, max=563, avg=229.31, stdev=41.40 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[ 120], 5.00th=[ 147], 10.00th=[ 174], 20.00th=[ 188], 00:17:19.874 | 30.00th=[ 200], 40.00th=[ 223], 50.00th=[ 239], 60.00th=[ 243], 00:17:19.874 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 258], 00:17:19.874 | 99.00th=[ 285], 99.50th=[ 420], 99.90th=[ 519], 99.95th=[ 519], 00:17:19.874 | 99.99th=[ 519] 00:17:19.874 bw ( KiB/s): min= 4096, max= 4096, per=29.49%, 
avg=4096.00, stdev= 0.00, samples=1 00:17:19.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:19.874 lat (usec) : 250=86.14%, 500=9.55%, 750=0.19% 00:17:19.874 lat (msec) : 50=4.12% 00:17:19.874 cpu : usr=0.39%, sys=0.20%, ctx=536, majf=0, minf=1 00:17:19.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:19.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:19.874 job2: (groupid=0, jobs=1): err= 0: pid=2682783: Wed May 15 10:36:35 2024 00:17:19.874 read: IOPS=1663, BW=6655KiB/s (6815kB/s)(6868KiB/1032msec) 00:17:19.874 slat (nsec): min=3528, max=34310, avg=5891.74, stdev=1922.34 00:17:19.874 clat (usec): min=174, max=41244, avg=363.76, stdev=2408.55 00:17:19.874 lat (usec): min=180, max=41253, avg=369.65, stdev=2409.87 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:17:19.874 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:17:19.874 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:17:19.874 | 99.00th=[ 388], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:17:19.874 | 99.99th=[41157] 00:17:19.874 write: IOPS=1984, BW=7938KiB/s (8128kB/s)(8192KiB/1032msec); 0 zone resets 00:17:19.874 slat (nsec): min=4414, max=51235, avg=7359.83, stdev=2320.08 00:17:19.874 clat (usec): min=99, max=3584, avg=182.79, stdev=94.94 00:17:19.874 lat (usec): min=104, max=3590, avg=190.15, stdev=95.91 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:17:19.874 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 153], 60.00th=[ 182], 00:17:19.874 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 265], 00:17:19.874 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 343], 99.95th=[ 652], 00:17:19.874 | 99.99th=[ 3589] 00:17:19.874 bw ( KiB/s): min= 6256, max=10128, per=58.97%, avg=8192.00, stdev=2737.92, samples=2 00:17:19.874 iops : min= 1564, max= 2532, avg=2048.00, stdev=684.48, samples=2 00:17:19.874 lat (usec) : 100=0.03%, 250=91.31%, 500=8.45%, 750=0.03% 00:17:19.874 lat (msec) : 4=0.03%, 50=0.16% 00:17:19.874 cpu : usr=1.75%, sys=3.20%, ctx=3768, majf=0, minf=1 00:17:19.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:19.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 issued rwts: total=1717,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:19.874 job3: (groupid=0, jobs=1): err= 0: pid=2682791: Wed May 15 10:36:35 2024 00:17:19.874 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:17:19.874 slat (nsec): min=8497, max=37635, avg=30562.82, stdev=7249.38 00:17:19.874 clat (usec): min=40837, max=41080, avg=40958.63, stdev=60.34 00:17:19.874 lat (usec): min=40870, max=41096, avg=40989.20, stdev=59.92 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:19.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:19.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:17:19.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:19.874 | 99.99th=[41157] 00:17:19.874 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:17:19.874 slat (nsec): min=4300, max=45610, avg=8490.06, stdev=2803.69 00:17:19.874 clat (usec): min=126, max=504, avg=232.92, stdev=36.69 00:17:19.874 lat (usec): min=135, max=549, avg=241.41, stdev=37.15 00:17:19.874 clat percentiles (usec): 00:17:19.874 | 1.00th=[ 149], 5.00th=[ 172], 10.00th=[ 184], 20.00th=[ 200], 00:17:19.874 | 30.00th=[ 225], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:17:19.874 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:17:19.874 | 99.00th=[ 314], 99.50th=[ 371], 99.90th=[ 506], 99.95th=[ 506], 00:17:19.874 | 99.99th=[ 506] 00:17:19.874 bw ( KiB/s): min= 4096, max= 4096, per=29.49%, avg=4096.00, stdev= 0.00, samples=1 00:17:19.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:19.874 lat (usec) : 250=78.09%, 500=17.60%, 750=0.19% 00:17:19.874 lat (msec) : 50=4.12% 00:17:19.874 cpu : usr=0.10%, sys=0.68%, ctx=536, majf=0, minf=1 00:17:19.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:19.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.875 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:19.875 00:17:19.875 Run status group 0 (all jobs): 00:17:19.875 READ: bw=6911KiB/s (7077kB/s), 85.7KiB/s-6655KiB/s (87.7kB/s-6815kB/s), io=7132KiB (7303kB), run=1021-1032msec 00:17:19.875 WRITE: bw=13.6MiB/s (14.2MB/s), 1994KiB/s-7938KiB/s (2042kB/s-8128kB/s), io=14.0MiB (14.7MB), run=1021-1032msec 00:17:19.875 00:17:19.875 Disk stats (read/write): 00:17:19.875 nvme0n1: ios=67/512, merge=0/0, ticks=1054/112, in_queue=1166, util=89.88% 00:17:19.875 nvme0n2: ios=41/512, merge=0/0, ticks=1642/109, in_queue=1751, util=92.76% 00:17:19.875 nvme0n3: ios=1746/2048, merge=0/0, ticks=1082/360, in_queue=1442, util=97.47% 00:17:19.875 nvme0n4: ios=80/512, merge=0/0, ticks=1268/117, in_queue=1385, util=98.19% 00:17:19.875 10:36:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:19.875 [global] 00:17:19.875 thread=1 00:17:19.875 invalidate=1 00:17:19.875 rw=write 00:17:19.875 time_based=1 00:17:19.875 runtime=1 00:17:19.875 ioengine=libaio 00:17:19.875 direct=1 00:17:19.875 bs=4096 00:17:19.875 iodepth=128 00:17:19.875 norandommap=0 00:17:19.875 numjobs=1 00:17:19.875 00:17:19.875 verify_dump=1 00:17:19.875 verify_backlog=512 00:17:19.875 verify_state_save=0 00:17:19.875 do_verify=1 00:17:19.875 verify=crc32c-intel 00:17:19.875 [job0] 00:17:19.875 filename=/dev/nvme0n1 00:17:19.875 [job1] 00:17:19.875 filename=/dev/nvme0n2 00:17:19.875 [job2] 00:17:19.875 filename=/dev/nvme0n3 00:17:19.875 [job3] 00:17:19.875 filename=/dev/nvme0n4 00:17:19.875 Could not set queue depth (nvme0n1) 00:17:19.875 Could not set queue depth (nvme0n2) 00:17:19.875 Could not set queue depth (nvme0n3) 00:17:19.875 Could not set queue depth (nvme0n4) 00:17:20.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.133 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.133 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.133 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.133 fio-3.35 00:17:20.133 Starting 4 threads 00:17:21.582 00:17:21.582 job0: (groupid=0, jobs=1): err= 0: pid=2683353: Wed May 15 10:36:37 2024 00:17:21.582 read: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:17:21.582 slat (nsec): min=908, max=11915k, avg=101492.46, stdev=768706.61 00:17:21.582 clat (usec): min=1745, max=23990, avg=12632.62, stdev=3432.98 00:17:21.582 lat (usec): min=2244, max=24026, avg=12734.11, stdev=3473.98 00:17:21.582 clat percentiles (usec): 00:17:21.582 | 1.00th=[ 3982], 5.00th=[ 6980], 10.00th=[10421], 20.00th=[11207], 00:17:21.582 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:17:21.582 | 70.00th=[12125], 80.00th=[15008], 90.00th=[17957], 95.00th=[20055], 00:17:21.582 | 99.00th=[22414], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:17:21.582 | 99.99th=[23987] 00:17:21.582 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:21.582 slat (nsec): min=1548, max=51923k, avg=146249.54, stdev=1905174.03 00:17:21.582 clat (usec): min=962, max=248259, avg=14911.33, stdev=19258.13 00:17:21.582 lat (msec): min=2, max=248, avg=15.06, stdev=19.60 00:17:21.582 clat percentiles (msec): 00:17:21.582 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:17:21.582 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:17:21.582 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 48], 00:17:21.582 | 99.00th=[ 100], 99.50th=[ 148], 99.90th=[ 249], 99.95th=[ 249], 00:17:21.582 | 99.99th=[ 249] 00:17:21.582 bw ( KiB/s): min=10480, max=22272, per=20.55%, avg=16376.00, stdev=8338.20, samples=2 00:17:21.582 iops : min= 2620, max= 5568, avg=4094.00, stdev=2084.55, samples=2 00:17:21.582 lat (usec) : 1000=0.01% 00:17:21.582 lat (msec) : 2=0.01%, 4=1.43%, 10=11.08%, 20=81.66%, 50=3.36% 00:17:21.582 lat (msec) : 100=2.04%, 250=0.41% 00:17:21.582 cpu : usr=1.60%, sys=1.60%, ctx=601, majf=0, minf=1 00:17:21.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:21.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.582 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.583 job1: (groupid=0, jobs=1): err= 0: pid=2683357: Wed May 15 10:36:37 2024 00:17:21.583 read: IOPS=5580, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:17:21.583 slat (nsec): min=907, max=11651k, avg=100729.01, stdev=748566.01 00:17:21.583 clat (usec): min=2917, max=23219, avg=12111.01, stdev=3180.51 00:17:21.583 lat (usec): min=3343, max=30053, avg=12211.74, stdev=3239.47 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 5276], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[ 9765], 00:17:21.583 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:17:21.583 | 70.00th=[12256], 80.00th=[14222], 90.00th=[16909], 95.00th=[19006], 00:17:21.583 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:17:21.583 | 99.99th=[23200] 00:17:21.583 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:17:21.583 slat (nsec): min=1587, max=9970.1k, avg=74480.01, stdev=384996.86 00:17:21.583 clat (usec): min=909, max=23214, 
avg=10391.07, stdev=2146.63 00:17:21.583 lat (usec): min=2499, max=23217, avg=10465.55, stdev=2167.76 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 3392], 5.00th=[ 5932], 10.00th=[ 7177], 20.00th=[ 9372], 00:17:21.583 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:17:21.583 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:17:21.583 | 99.00th=[14222], 99.50th=[16909], 99.90th=[21890], 99.95th=[23200], 00:17:21.583 | 99.99th=[23200] 00:17:21.583 bw ( KiB/s): min=20480, max=24576, per=28.27%, avg=22528.00, stdev=2896.31, samples=2 00:17:21.583 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:17:21.583 lat (usec) : 1000=0.01% 00:17:21.583 lat (msec) : 4=0.87%, 10=25.60%, 20=71.73%, 50=1.79% 00:17:21.583 cpu : usr=2.78%, sys=3.57%, ctx=697, majf=0, minf=1 00:17:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.583 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.583 job2: (groupid=0, jobs=1): err= 0: pid=2683358: Wed May 15 10:36:37 2024 00:17:21.583 read: IOPS=4788, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1008msec) 00:17:21.583 slat (nsec): min=897, max=12460k, avg=111701.33, stdev=805102.85 00:17:21.583 clat (usec): min=3427, max=25632, avg=13473.04, stdev=3570.38 00:17:21.583 lat (usec): min=3432, max=25637, avg=13584.75, stdev=3623.48 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 5276], 5.00th=[ 8455], 10.00th=[10421], 20.00th=[11076], 00:17:21.583 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13042], 00:17:21.583 | 70.00th=[13304], 80.00th=[15664], 90.00th=[19530], 95.00th=[21103], 00:17:21.583 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:17:21.583 | 99.99th=[25560] 00:17:21.583 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:17:21.583 slat (nsec): min=1634, max=11172k, avg=86764.74, stdev=456810.18 00:17:21.583 clat (usec): min=2038, max=32306, avg=12265.23, stdev=4077.01 00:17:21.583 lat (usec): min=2045, max=32309, avg=12351.99, stdev=4110.50 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 3228], 5.00th=[ 5800], 10.00th=[ 8029], 20.00th=[10552], 00:17:21.583 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12518], 60.00th=[12911], 00:17:21.583 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[17171], 00:17:21.583 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:17:21.583 | 99.99th=[32375] 00:17:21.583 bw ( KiB/s): min=19728, max=21232, per=25.70%, avg=20480.00, stdev=1063.49, samples=2 00:17:21.583 iops : min= 4932, max= 5308, avg=5120.00, stdev=265.87, samples=2 00:17:21.583 lat (msec) : 4=1.10%, 10=10.58%, 20=83.06%, 50=5.27% 00:17:21.583 cpu : usr=2.58%, sys=3.28%, ctx=618, majf=0, minf=1 00:17:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.583 issued rwts: total=4827,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.583 job3: (groupid=0, jobs=1): err= 0: pid=2683359: Wed May 15 10:36:37 2024 
00:17:21.583 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:17:21.583 slat (nsec): min=972, max=12858k, avg=112295.72, stdev=804559.33 00:17:21.583 clat (usec): min=3889, max=25622, avg=13404.67, stdev=3325.01 00:17:21.583 lat (usec): min=3893, max=25631, avg=13516.97, stdev=3379.57 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 4752], 5.00th=[10028], 10.00th=[10421], 20.00th=[11469], 00:17:21.583 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:17:21.583 | 70.00th=[13566], 80.00th=[14222], 90.00th=[19006], 95.00th=[20841], 00:17:21.583 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:17:21.583 | 99.99th=[25560] 00:17:21.583 write: IOPS=5236, BW=20.5MiB/s (21.4MB/s)(20.7MiB/1011msec); 0 zone resets 00:17:21.583 slat (nsec): min=1619, max=9238.0k, avg=75388.26, stdev=340994.93 00:17:21.583 clat (usec): min=872, max=25588, avg=11236.49, stdev=2948.03 00:17:21.583 lat (usec): min=881, max=25591, avg=11311.88, stdev=2970.63 00:17:21.583 clat percentiles (usec): 00:17:21.583 | 1.00th=[ 2966], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 8979], 00:17:21.583 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[12780], 00:17:21.583 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13566], 95.00th=[13960], 00:17:21.583 | 99.00th=[16909], 99.50th=[17171], 99.90th=[24773], 99.95th=[24773], 00:17:21.583 | 99.99th=[25560] 00:17:21.583 bw ( KiB/s): min=18888, max=22448, per=25.94%, avg=20668.00, stdev=2517.30, samples=2 00:17:21.583 iops : min= 4722, max= 5612, avg=5167.00, stdev=629.33, samples=2 00:17:21.583 lat (usec) : 1000=0.03% 00:17:21.583 lat (msec) : 2=0.09%, 4=1.26%, 10=13.79%, 20=81.36%, 50=3.48% 00:17:21.583 cpu : usr=1.29%, sys=4.95%, ctx=673, majf=0, minf=1 00:17:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.583 issued rwts: total=5120,5294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.583 00:17:21.583 Run status group 0 (all jobs): 00:17:21.583 READ: bw=74.5MiB/s (78.1MB/s), 14.4MiB/s-21.8MiB/s (15.1MB/s-22.9MB/s), io=75.3MiB (79.0MB), run=1004-1011msec 00:17:21.583 WRITE: bw=77.8MiB/s (81.6MB/s), 15.9MiB/s-21.8MiB/s (16.7MB/s-22.9MB/s), io=78.7MiB (82.5MB), run=1004-1011msec 00:17:21.583 00:17:21.583 Disk stats (read/write): 00:17:21.583 nvme0n1: ios=2959/3072, merge=0/0, ticks=37062/32339, in_queue=69401, util=99.70% 00:17:21.583 nvme0n2: ios=4630/4847, merge=0/0, ticks=54563/48728, in_queue=103291, util=94.29% 00:17:21.583 nvme0n3: ios=4128/4239, merge=0/0, ticks=54053/50536, in_queue=104589, util=97.68% 00:17:21.583 nvme0n4: ios=4138/4607, merge=0/0, ticks=53490/50266, in_queue=103756, util=98.51% 00:17:21.583 10:36:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:21.583 [global] 00:17:21.583 thread=1 00:17:21.583 invalidate=1 00:17:21.583 rw=randwrite 00:17:21.583 time_based=1 00:17:21.583 runtime=1 00:17:21.583 ioengine=libaio 00:17:21.583 direct=1 00:17:21.583 bs=4096 00:17:21.583 iodepth=128 00:17:21.583 norandommap=0 00:17:21.583 numjobs=1 00:17:21.583 00:17:21.583 verify_dump=1 00:17:21.583 verify_backlog=512 00:17:21.583 verify_state_save=0 00:17:21.583 do_verify=1 00:17:21.583 verify=crc32c-intel 
00:17:21.583 [job0] 00:17:21.583 filename=/dev/nvme0n1 00:17:21.583 [job1] 00:17:21.583 filename=/dev/nvme0n2 00:17:21.583 [job2] 00:17:21.583 filename=/dev/nvme0n3 00:17:21.583 [job3] 00:17:21.583 filename=/dev/nvme0n4 00:17:21.583 Could not set queue depth (nvme0n1) 00:17:21.583 Could not set queue depth (nvme0n2) 00:17:21.583 Could not set queue depth (nvme0n3) 00:17:21.583 Could not set queue depth (nvme0n4) 00:17:21.842 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.842 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.842 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.842 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.842 fio-3.35 00:17:21.842 Starting 4 threads 00:17:23.230 00:17:23.230 job0: (groupid=0, jobs=1): err= 0: pid=2683833: Wed May 15 10:36:38 2024 00:17:23.230 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:17:23.230 slat (nsec): min=866, max=15461k, avg=114481.55, stdev=732680.30 00:17:23.230 clat (usec): min=5828, max=39390, avg=14886.51, stdev=6069.43 00:17:23.230 lat (usec): min=6075, max=39404, avg=15000.99, stdev=6117.62 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 7177], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:17:23.230 | 30.00th=[10290], 40.00th=[11731], 50.00th=[12780], 60.00th=[14091], 00:17:23.230 | 70.00th=[16188], 80.00th=[20317], 90.00th=[24249], 95.00th=[26084], 00:17:23.230 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:17:23.230 | 99.99th=[39584] 00:17:23.230 write: IOPS=4515, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec); 0 zone resets 00:17:23.230 slat (nsec): min=1572, max=15011k, avg=114411.62, stdev=698358.99 00:17:23.230 clat (usec): min=336, max=36070, avg=14574.07, stdev=5862.52 00:17:23.230 lat (usec): min=5501, max=36077, avg=14688.48, stdev=5909.73 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[ 9896], 00:17:23.230 | 30.00th=[10290], 40.00th=[10683], 50.00th=[12256], 60.00th=[13042], 00:17:23.230 | 70.00th=[16909], 80.00th=[20317], 90.00th=[24249], 95.00th=[26608], 00:17:23.230 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:17:23.230 | 99.99th=[35914] 00:17:23.230 bw ( KiB/s): min=16432, max=18888, per=23.31%, avg=17660.00, stdev=1736.65, samples=2 00:17:23.230 iops : min= 4108, max= 4722, avg=4415.00, stdev=434.16, samples=2 00:17:23.230 lat (usec) : 500=0.01% 00:17:23.230 lat (msec) : 10=24.20%, 20=54.74%, 50=21.04% 00:17:23.230 cpu : usr=1.69%, sys=3.78%, ctx=465, majf=0, minf=1 00:17:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.230 issued rwts: total=4096,4543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.230 job1: (groupid=0, jobs=1): err= 0: pid=2683834: Wed May 15 10:36:38 2024 00:17:23.230 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:17:23.230 slat (nsec): min=790, max=28415k, avg=96109.65, stdev=1020594.47 00:17:23.230 clat (usec): min=1472, max=48894, avg=15570.30, stdev=9068.60 00:17:23.230 lat (usec): 
min=1478, max=53441, avg=15666.41, stdev=9128.61 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 3359], 5.00th=[ 4424], 10.00th=[ 5604], 20.00th=[ 8586], 00:17:23.230 | 30.00th=[10421], 40.00th=[11731], 50.00th=[12518], 60.00th=[14877], 00:17:23.230 | 70.00th=[17957], 80.00th=[23987], 90.00th=[30016], 95.00th=[32113], 00:17:23.230 | 99.00th=[42206], 99.50th=[43779], 99.90th=[48497], 99.95th=[48497], 00:17:23.230 | 99.99th=[49021] 00:17:23.230 write: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1007msec); 0 zone resets 00:17:23.230 slat (nsec): min=1522, max=16146k, avg=67476.12, stdev=579842.81 00:17:23.230 clat (usec): min=713, max=61938, avg=11429.45, stdev=8169.19 00:17:23.230 lat (usec): min=718, max=61941, avg=11496.92, stdev=8196.80 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 906], 5.00th=[ 3130], 10.00th=[ 4817], 20.00th=[ 6652], 00:17:23.230 | 30.00th=[ 7963], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10814], 00:17:23.230 | 70.00th=[12256], 80.00th=[14091], 90.00th=[17171], 95.00th=[22152], 00:17:23.230 | 99.00th=[60031], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:17:23.230 | 99.99th=[62129] 00:17:23.230 bw ( KiB/s): min=17872, max=20480, per=25.31%, avg=19176.00, stdev=1844.13, samples=2 00:17:23.230 iops : min= 4468, max= 5120, avg=4794.00, stdev=461.03, samples=2 00:17:23.230 lat (usec) : 750=0.07%, 1000=0.82% 00:17:23.230 lat (msec) : 2=1.00%, 4=3.68%, 10=34.14%, 20=45.11%, 50=14.54% 00:17:23.230 lat (msec) : 100=0.63% 00:17:23.230 cpu : usr=2.88%, sys=3.98%, ctx=325, majf=0, minf=1 00:17:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.230 issued rwts: total=4608,4922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.230 job2: (groupid=0, jobs=1): err= 0: pid=2683835: Wed May 15 10:36:38 2024 00:17:23.230 read: IOPS=4330, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1004msec) 00:17:23.230 slat (nsec): min=945, max=15747k, avg=119296.05, stdev=878271.23 00:17:23.230 clat (usec): min=1195, max=35487, avg=13978.26, stdev=4354.46 00:17:23.230 lat (usec): min=3769, max=35492, avg=14097.56, stdev=4405.08 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 4686], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10945], 00:17:23.230 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13435], 00:17:23.230 | 70.00th=[15270], 80.00th=[17171], 90.00th=[19792], 95.00th=[22938], 00:17:23.230 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32113], 99.95th=[32113], 00:17:23.230 | 99.99th=[35390] 00:17:23.230 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:23.230 slat (nsec): min=1588, max=16554k, avg=102012.11, stdev=575727.80 00:17:23.230 clat (usec): min=2291, max=45525, avg=14389.19, stdev=7096.92 00:17:23.230 lat (usec): min=2296, max=45533, avg=14491.20, stdev=7134.35 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 3294], 5.00th=[ 5669], 10.00th=[ 8586], 20.00th=[11207], 00:17:23.230 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12518], 00:17:23.230 | 70.00th=[13304], 80.00th=[19006], 90.00th=[25297], 95.00th=[27395], 00:17:23.230 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:17:23.230 | 99.99th=[45351] 00:17:23.230 bw ( KiB/s): min=16496, max=20368, per=24.33%, avg=18432.00, 
stdev=2737.92, samples=2 00:17:23.230 iops : min= 4124, max= 5092, avg=4608.00, stdev=684.48, samples=2 00:17:23.230 lat (msec) : 2=0.01%, 4=1.12%, 10=9.59%, 20=76.09%, 50=13.19% 00:17:23.230 cpu : usr=1.69%, sys=2.79%, ctx=644, majf=0, minf=1 00:17:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.230 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.230 job3: (groupid=0, jobs=1): err= 0: pid=2683836: Wed May 15 10:36:38 2024 00:17:23.230 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:17:23.230 slat (nsec): min=918, max=11847k, avg=116331.32, stdev=827680.08 00:17:23.230 clat (usec): min=4254, max=45651, avg=13695.62, stdev=5404.18 00:17:23.230 lat (usec): min=4259, max=45657, avg=13811.95, stdev=5467.53 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 5800], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10683], 00:17:23.230 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12911], 00:17:23.230 | 70.00th=[14353], 80.00th=[15664], 90.00th=[18482], 95.00th=[20841], 00:17:23.230 | 99.00th=[40633], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:17:23.230 | 99.99th=[45876] 00:17:23.230 write: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1011msec); 0 zone resets 00:17:23.230 slat (nsec): min=1547, max=10370k, avg=88435.05, stdev=491270.66 00:17:23.230 clat (usec): min=1083, max=46863, avg=12819.55, stdev=6588.32 00:17:23.230 lat (usec): min=1092, max=46866, avg=12907.99, stdev=6613.07 00:17:23.230 clat percentiles (usec): 00:17:23.230 | 1.00th=[ 3097], 5.00th=[ 5866], 10.00th=[ 7439], 20.00th=[10421], 00:17:23.230 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:17:23.230 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16057], 95.00th=[22938], 00:17:23.230 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:17:23.230 | 99.99th=[46924] 00:17:23.230 bw ( KiB/s): min=19128, max=20480, per=26.14%, avg=19804.00, stdev=956.01, samples=2 00:17:23.230 iops : min= 4782, max= 5120, avg=4951.00, stdev=239.00, samples=2 00:17:23.230 lat (msec) : 2=0.19%, 4=0.84%, 10=11.49%, 20=81.14%, 50=6.35% 00:17:23.230 cpu : usr=1.88%, sys=3.86%, ctx=646, majf=0, minf=1 00:17:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.230 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.230 00:17:23.230 Run status group 0 (all jobs): 00:17:23.230 READ: bw=68.2MiB/s (71.5MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.7MB/s), io=69.0MiB (72.3MB), run=1004-1011msec 00:17:23.230 WRITE: bw=74.0MiB/s (77.6MB/s), 17.6MiB/s-19.6MiB/s (18.5MB/s-20.6MB/s), io=74.8MiB (78.4MB), run=1004-1011msec 00:17:23.230 00:17:23.230 Disk stats (read/write): 00:17:23.230 nvme0n1: ios=3611/3847, merge=0/0, ticks=26232/26050, in_queue=52282, util=88.38% 00:17:23.230 nvme0n2: ios=3680/4096, merge=0/0, ticks=39657/35283, in_queue=74940, util=89.40% 00:17:23.230 nvme0n3: ios=3603/3735, merge=0/0, ticks=49498/55246, in_queue=104744, util=98.95% 00:17:23.230 nvme0n4: ios=3766/4096, 
merge=0/0, ticks=50019/53140, in_queue=103159, util=94.14% 00:17:23.230 10:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:23.230 10:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2684072 00:17:23.230 10:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:23.230 10:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:23.230 [global] 00:17:23.230 thread=1 00:17:23.230 invalidate=1 00:17:23.230 rw=read 00:17:23.230 time_based=1 00:17:23.230 runtime=10 00:17:23.230 ioengine=libaio 00:17:23.231 direct=1 00:17:23.231 bs=4096 00:17:23.231 iodepth=1 00:17:23.231 norandommap=1 00:17:23.231 numjobs=1 00:17:23.231 00:17:23.231 [job0] 00:17:23.231 filename=/dev/nvme0n1 00:17:23.231 [job1] 00:17:23.231 filename=/dev/nvme0n2 00:17:23.231 [job2] 00:17:23.231 filename=/dev/nvme0n3 00:17:23.231 [job3] 00:17:23.231 filename=/dev/nvme0n4 00:17:23.231 Could not set queue depth (nvme0n1) 00:17:23.231 Could not set queue depth (nvme0n2) 00:17:23.231 Could not set queue depth (nvme0n3) 00:17:23.231 Could not set queue depth (nvme0n4) 00:17:23.490 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.490 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.490 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.490 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.490 fio-3.35 00:17:23.490 Starting 4 threads 00:17:26.017 10:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:26.274 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=37748736, buflen=4096 00:17:26.275 fio: pid=2684302, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.275 10:36:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:26.275 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=25526272, buflen=4096 00:17:26.275 fio: pid=2684301, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.275 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.275 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:26.534 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.534 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:26.534 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=290816, buflen=4096 00:17:26.534 fio: pid=2684299, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.534 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.534 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=389120, buflen=4096 00:17:26.534 fio: pid=2684300, err=121/file:io_u.c:1889, func=io_u error, 
error=Remote I/O error 00:17:26.534 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:26.793 00:17:26.793 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2684299: Wed May 15 10:36:42 2024 00:17:26.793 read: IOPS=24, BW=98.4KiB/s (101kB/s)(284KiB/2885msec) 00:17:26.793 slat (usec): min=8, max=8570, avg=149.22, stdev=1006.50 00:17:26.793 clat (usec): min=1011, max=41942, avg=40459.95, stdev=4754.45 00:17:26.793 lat (usec): min=1100, max=50117, avg=40610.81, stdev=4881.73 00:17:26.793 clat percentiles (usec): 00:17:26.793 | 1.00th=[ 1012], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:26.793 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:26.793 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:26.793 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:26.793 | 99.99th=[41681] 00:17:26.793 bw ( KiB/s): min= 96, max= 104, per=0.47%, avg=97.60, stdev= 3.58, samples=5 00:17:26.793 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:26.793 lat (msec) : 2=1.39%, 50=97.22% 00:17:26.793 cpu : usr=0.17%, sys=0.00%, ctx=74, majf=0, minf=1 00:17:26.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.793 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.793 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.793 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2684300: Wed May 15 10:36:42 2024 00:17:26.793 read: IOPS=31, BW=125KiB/s (128kB/s)(380KiB/3037msec) 00:17:26.793 slat (usec): min=4, max=11722, avg=194.63, stdev=1282.74 00:17:26.793 clat (usec): min=201, max=42285, avg=31758.69, stdev=17865.28 00:17:26.793 lat (usec): min=228, max=47026, avg=31831.98, stdev=17906.44 00:17:26.793 clat percentiles (usec): 00:17:26.793 | 1.00th=[ 202], 5.00th=[ 255], 10.00th=[ 318], 20.00th=[ 347], 00:17:26.793 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:26.793 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:26.793 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:26.793 | 99.99th=[42206] 00:17:26.793 bw ( KiB/s): min= 104, max= 224, per=0.64%, avg=131.20, stdev=52.34, samples=5 00:17:26.793 iops : min= 26, max= 56, avg=32.80, stdev=13.08, samples=5 00:17:26.793 lat (usec) : 250=3.12%, 500=19.79%, 750=1.04% 00:17:26.793 lat (msec) : 50=75.00% 00:17:26.793 cpu : usr=0.13%, sys=0.00%, ctx=98, majf=0, minf=1 00:17:26.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.793 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.793 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.794 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2684301: Wed May 15 10:36:42 2024 00:17:26.794 read: IOPS=2287, BW=9148KiB/s (9367kB/s)(24.3MiB/2725msec) 00:17:26.794 slat (usec): min=3, max=22728, avg=12.30, stdev=321.71 00:17:26.794 clat 
(usec): min=137, max=41666, avg=423.81, stdev=2581.73 00:17:26.794 lat (usec): min=143, max=41678, avg=436.11, stdev=2602.84 00:17:26.794 clat percentiles (usec): 00:17:26.794 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 215], 20.00th=[ 229], 00:17:26.794 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 265], 00:17:26.794 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 326], 00:17:26.794 | 99.00th=[ 445], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:17:26.794 | 99.99th=[41681] 00:17:26.794 bw ( KiB/s): min= 112, max=15088, per=43.84%, avg=9016.00, stdev=8035.27, samples=5 00:17:26.794 iops : min= 28, max= 3772, avg=2254.00, stdev=2008.82, samples=5 00:17:26.794 lat (usec) : 250=37.62%, 500=61.82%, 750=0.11%, 1000=0.02% 00:17:26.794 lat (msec) : 20=0.02%, 50=0.40% 00:17:26.794 cpu : usr=0.48%, sys=1.98%, ctx=6235, majf=0, minf=1 00:17:26.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.794 issued rwts: total=6233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.794 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2684302: Wed May 15 10:36:42 2024 00:17:26.794 read: IOPS=3576, BW=14.0MiB/s (14.6MB/s)(36.0MiB/2577msec) 00:17:26.794 slat (usec): min=3, max=522, avg= 6.54, stdev= 6.04 00:17:26.794 clat (usec): min=174, max=953, avg=271.98, stdev=54.67 00:17:26.794 lat (usec): min=179, max=1009, avg=278.52, stdev=55.51 00:17:26.794 clat percentiles (usec): 00:17:26.794 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 235], 00:17:26.794 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:17:26.794 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 400], 00:17:26.794 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 515], 99.95th=[ 519], 00:17:26.794 | 99.99th=[ 955] 00:17:26.794 bw ( KiB/s): min=12616, max=16480, per=69.64%, avg=14321.60, stdev=1566.55, samples=5 00:17:26.794 iops : min= 3154, max= 4120, avg=3580.40, stdev=391.64, samples=5 00:17:26.794 lat (usec) : 250=32.51%, 500=67.18%, 750=0.29%, 1000=0.01% 00:17:26.794 cpu : usr=0.74%, sys=4.50%, ctx=9217, majf=0, minf=2 00:17:26.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.794 issued rwts: total=9217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.794 00:17:26.794 Run status group 0 (all jobs): 00:17:26.794 READ: bw=20.1MiB/s (21.1MB/s), 98.4KiB/s-14.0MiB/s (101kB/s-14.6MB/s), io=61.0MiB (64.0MB), run=2577-3037msec 00:17:26.794 00:17:26.794 Disk stats (read/write): 00:17:26.794 nvme0n1: ios=69/0, merge=0/0, ticks=2793/0, in_queue=2793, util=94.52% 00:17:26.794 nvme0n2: ios=90/0, merge=0/0, ticks=2810/0, in_queue=2810, util=95.20% 00:17:26.794 nvme0n3: ios=5962/0, merge=0/0, ticks=2513/0, in_queue=2513, util=95.99% 00:17:26.794 nvme0n4: ios=8353/0, merge=0/0, ticks=2215/0, in_queue=2215, util=96.02% 00:17:26.794 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.794 10:36:42 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:27.053 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.053 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:27.053 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.053 10:36:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:27.310 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.310 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:27.568 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:27.568 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2684072 00:17:27.568 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:27.568 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:27.826 nvmf hotplug test: fio failed as expected 00:17:27.826 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:28.083 10:36:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.083 rmmod nvme_tcp 00:17:28.083 rmmod nvme_fabrics 00:17:28.083 rmmod nvme_keyring 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2680642 ']' 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2680642 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2680642 ']' 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2680642 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:17:28.083 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2680642 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2680642' 00:17:28.084 killing process with pid 2680642 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2680642 00:17:28.084 [2024-05-15 10:36:43.851798] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:28.084 10:36:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2680642 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.651 10:36:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.552 10:36:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.552 00:17:30.552 real 0m27.135s 00:17:30.552 user 2m35.609s 00:17:30.552 sys 0m7.297s 00:17:30.552 10:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:30.552 10:36:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 ************************************ 00:17:30.552 END TEST nvmf_fio_target 00:17:30.552 ************************************ 00:17:30.553 10:36:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:17:30.553 10:36:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:30.553 10:36:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:30.553 10:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.812 ************************************ 00:17:30.812 START TEST nvmf_bdevio 00:17:30.812 ************************************ 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:30.812 * Looking for test storage... 00:17:30.812 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.812 10:36:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
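Everything from the top of this test down to the PCI scan that follows is the effect of sourcing test/nvmf/common.sh, which defines the connection parameters reused for the rest of the run. Reduced to the handful of values that matter later, as a sketch with values copied from the trace (the hostid derivation is illustrative, not a copy of common.sh):

    NVMF_PORT=4420
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # illustrative: the uuid portion of the host NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")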
00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.813 10:36:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:36.077 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:36.077 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:36.077 Found net devices under 0000:27:00.0: cvl_0_0 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:36.077 Found net devices under 0000:27:00.1: cvl_0_1 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.077 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:17:36.078 00:17:36.078 --- 10.0.0.2 ping statistics --- 00:17:36.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.078 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:17:36.078 00:17:36.078 --- 10.0.0.1 ping statistics --- 00:17:36.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.078 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2689093 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2689093 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2689093 ']' 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.078 10:36:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:36.078 [2024-05-15 10:36:51.610979] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
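The block above is the TCP init helper splitting the two ports of the NIC discovered earlier between a fresh network namespace (target side, cvl_0_0 at 10.0.0.2) and the root namespace (initiator side, cvl_0_1 at 10.0.0.1), then pinging in both directions before nvmf_tgt is started inside that namespace. Condensed into plain commands, all of them taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns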
00:17:36.078 [2024-05-15 10:36:51.611103] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.078 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.078 [2024-05-15 10:36:51.737829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.078 [2024-05-15 10:36:51.833875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.078 [2024-05-15 10:36:51.833913] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.078 [2024-05-15 10:36:51.833925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.078 [2024-05-15 10:36:51.833935] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.078 [2024-05-15 10:36:51.833943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.078 [2024-05-15 10:36:51.834152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.078 [2024-05-15 10:36:51.834227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.078 [2024-05-15 10:36:51.834320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.078 [2024-05-15 10:36:51.834350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 [2024-05-15 10:36:52.346180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 Malloc0 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
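The reactor placement above follows directly from the core mask: nvmf_tgt was started with -m 0x78, and 0x78 is binary 1111000, i.e. bits 3-6 set, which is why four reactors come up on cores 3, 4, 5 and 6 (the bdevio app launched a little further down uses -c 0x7 and lands on cores 0-2, so the two processes do not share cores). A quick way to decode such a mask:

    mask=0x78
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # prints: core 3, core 4, core 5, core 6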
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.643 [2024-05-15 10:36:52.414071] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:36.643 [2024-05-15 10:36:52.414342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:36.643 { 00:17:36.643 "params": { 00:17:36.643 "name": "Nvme$subsystem", 00:17:36.643 "trtype": "$TEST_TRANSPORT", 00:17:36.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.643 "adrfam": "ipv4", 00:17:36.643 "trsvcid": "$NVMF_PORT", 00:17:36.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.643 "hdgst": ${hdgst:-false}, 00:17:36.643 "ddgst": ${ddgst:-false} 00:17:36.643 }, 00:17:36.643 "method": "bdev_nvme_attach_controller" 00:17:36.643 } 00:17:36.643 EOF 00:17:36.643 )") 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:36.643 10:36:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:36.643 "params": { 00:17:36.643 "name": "Nvme1", 00:17:36.643 "trtype": "tcp", 00:17:36.643 "traddr": "10.0.0.2", 00:17:36.643 "adrfam": "ipv4", 00:17:36.643 "trsvcid": "4420", 00:17:36.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.643 "hdgst": false, 00:17:36.643 "ddgst": false 00:17:36.643 }, 00:17:36.643 "method": "bdev_nvme_attach_controller" 00:17:36.643 }' 00:17:36.643 [2024-05-15 10:36:52.485551] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:17:36.643 [2024-05-15 10:36:52.485650] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689392 ] 00:17:36.900 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.900 [2024-05-15 10:36:52.594588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.900 [2024-05-15 10:36:52.693338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.900 [2024-05-15 10:36:52.693440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.900 [2024-05-15 10:36:52.693444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.465 I/O targets: 00:17:37.465 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:37.465 00:17:37.465 00:17:37.465 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.465 http://cunit.sourceforge.net/ 00:17:37.465 00:17:37.465 00:17:37.465 Suite: bdevio tests on: Nvme1n1 00:17:37.465 Test: blockdev write read block ...passed 00:17:37.465 Test: blockdev write zeroes read block ...passed 00:17:37.465 Test: blockdev write zeroes read no split ...passed 00:17:37.465 Test: blockdev write zeroes read split ...passed 00:17:37.465 Test: blockdev write zeroes read split partial ...passed 00:17:37.465 Test: blockdev reset ...[2024-05-15 10:36:53.297488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:37.465 [2024-05-15 10:36:53.297599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a4380 (9): Bad file descriptor 00:17:37.721 [2024-05-15 10:36:53.351321] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
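At this point the target side is fully assembled: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420; the bdevio app is then handed the bdev_nvme_attach_controller JSON printed above so it attaches to the subsystem as Nvme1 and runs its block-device tests against it. The same target setup expressed as explicit rpc.py calls against the running nvmf_tgt (the test goes through the rpc_cmd wrapper, but the arguments are identical):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

A kernel host could attach to the same listener with roughly nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1, which is what the fio target test above did before its nvme disconnect; bdevio instead drives the SPDK userspace NVMe initiator.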
00:17:37.721 passed 00:17:37.721 Test: blockdev write read 8 blocks ...passed 00:17:37.721 Test: blockdev write read size > 128k ...passed 00:17:37.722 Test: blockdev write read invalid size ...passed 00:17:37.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:37.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:37.722 Test: blockdev write read max offset ...passed 00:17:37.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:37.722 Test: blockdev writev readv 8 blocks ...passed 00:17:37.722 Test: blockdev writev readv 30 x 1block ...passed 00:17:37.979 Test: blockdev writev readv block ...passed 00:17:37.979 Test: blockdev writev readv size > 128k ...passed 00:17:37.979 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:37.979 Test: blockdev comparev and writev ...[2024-05-15 10:36:53.609779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.609820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.609838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.609850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.610757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.979 [2024-05-15 10:36:53.610765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:37.979 passed 00:17:37.979 Test: blockdev nvme passthru rw ...passed 00:17:37.979 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:36:53.694331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.979 [2024-05-15 10:36:53.694357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.694498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.979 [2024-05-15 10:36:53.694508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.694634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.979 [2024-05-15 10:36:53.694643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:37.979 [2024-05-15 10:36:53.694778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.979 [2024-05-15 10:36:53.694787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:37.979 passed 00:17:37.979 Test: blockdev nvme admin passthru ...passed 00:17:37.979 Test: blockdev copy ...passed 00:17:37.979 00:17:37.979 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.979 suites 1 1 n/a 0 0 00:17:37.979 tests 23 23 23 0 0 00:17:37.979 asserts 152 152 152 0 n/a 00:17:37.979 00:17:37.979 Elapsed time = 1.263 seconds 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.544 rmmod nvme_tcp 00:17:38.544 rmmod nvme_fabrics 00:17:38.544 rmmod nvme_keyring 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2689093 ']' 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2689093 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
2689093 ']' 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2689093 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2689093 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2689093' 00:17:38.544 killing process with pid 2689093 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 2689093 00:17:38.544 [2024-05-15 10:36:54.232348] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:38.544 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2689093 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.112 10:36:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.042 10:36:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.042 00:17:41.042 real 0m10.346s 00:17:41.042 user 0m15.920s 00:17:41.042 sys 0m4.231s 00:17:41.042 10:36:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:41.042 10:36:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.042 ************************************ 00:17:41.042 END TEST nvmf_bdevio 00:17:41.042 ************************************ 00:17:41.042 10:36:56 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.042 10:36:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:41.042 10:36:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:41.042 10:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.042 ************************************ 00:17:41.042 START TEST nvmf_auth_target 00:17:41.042 ************************************ 00:17:41.042 10:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.301 * Looking for test storage... 
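Teardown mirrors the setup in reverse: the subsystem is deleted over RPC, the kernel NVMe/TCP modules are unloaded, the nvmf_tgt process (pid 2689093 in this run) is killed and waited for, and the test namespace and leftover addresses are flushed before the next test starts. As a sketch of the same sequence (the _remove_spdk_ns step is hidden behind an xtrace filter in the log, so the ip netns delete line is an assumption about what it amounts to):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp          # the log shows this also dropping nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # nvmfpid = the nvmf_tgt started earlier in this test
    ip netns delete cvl_0_0_ns_spdk       # assumption: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1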
00:17:41.301 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.301 10:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:46.570 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:46.570 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:46.570 Found net devices under 0000:27:00.0: cvl_0_0 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:46.570 Found net devices under 0000:27:00.1: cvl_0_1 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:17:46.570 00:17:46.570 --- 10.0.0.2 ping statistics --- 00:17:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.570 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:46.570 00:17:46.570 --- 10.0.0.1 ping statistics --- 00:17:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.570 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.570 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2693588 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2693588 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2693588 ']' 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
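The nvmf_tcp_init sequence traced above turns the two ice ports discovered earlier (0000:27:00.0 / 0000:27:00.1, i.e. cvl_0_0 / cvl_0_1) into a self-contained target/initiator pair: the target-side port is moved into its own network namespace, each side gets a /24 address, an iptables ACCEPT rule for TCP/4420 is inserted for traffic arriving on the initiator-side interface, and reachability is verified in both directions before nvmf_tgt is launched inside that namespace (the ip netns exec ... nvmf_tgt invocation just below). Condensed into a standalone sketch; the interface names and addresses are the ones this particular run discovered, not fixed values:

# target port lives in its own namespace, initiator port stays in the root namespace
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (NVMF_INITIATOR_IP)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator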
00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 10:37:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2693899 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=50b797ac010c36f2c5c74898e34d45b8dffb973717411507 00:17:47.506 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zUE 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 50b797ac010c36f2c5c74898e34d45b8dffb973717411507 0 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 50b797ac010c36f2c5c74898e34d45b8dffb973717411507 0 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=50b797ac010c36f2c5c74898e34d45b8dffb973717411507 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zUE 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zUE 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.zUE 
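gen_dhchap_key, traced above for keys[0], draws len/2 random bytes as a hex string with xxd and hands the result to format_dhchap_key, which shells out to python to build the DHHC-1:<digest-id>:<base64>: representation and stores it in a chmod-0600 temp file. The python body itself is not shown in the trace; the sketch below assumes the usual DH-HMAC-CHAP secret encoding (the ASCII key with a CRC-32 appended, then base64), which is consistent with the DHHC-1:00:NTBi...xM+VHA==: secret that reappears in the nvme connect lines later in this log:

digest=0 len=48                                   # digest id 0 == "null", 48 hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")       # byte order of the appended CRC is an assumption
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PY
chmod 0600 "$file"
keys[0]=$file        # keys[1..3] repeat this with the sha256/sha384/sha512 digest ids and longer keys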
00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5b75d7944ea7bccf14281b4b79c6dc9 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C99 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5b75d7944ea7bccf14281b4b79c6dc9 1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5b75d7944ea7bccf14281b4b79c6dc9 1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5b75d7944ea7bccf14281b4b79c6dc9 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C99 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C99 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.C99 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5f4e9661d6b31923020cfc06d172023928f3114be9e5ac4d 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Mue 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5f4e9661d6b31923020cfc06d172023928f3114be9e5ac4d 2 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5f4e9661d6b31923020cfc06d172023928f3114be9e5ac4d 2 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.507 10:37:03 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5f4e9661d6b31923020cfc06d172023928f3114be9e5ac4d 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Mue 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Mue 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.Mue 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4441bc13dfdec1c83180f13568571dcedf324aa339cb8dbfe203e612c5e575b0 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.n9z 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4441bc13dfdec1c83180f13568571dcedf324aa339cb8dbfe203e612c5e575b0 3 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4441bc13dfdec1c83180f13568571dcedf324aa339cb8dbfe203e612c5e575b0 3 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4441bc13dfdec1c83180f13568571dcedf324aa339cb8dbfe203e612c5e575b0 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.n9z 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.n9z 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.n9z 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2693588 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2693588 ']' 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:47.507 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2693899 /var/tmp/host.sock 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2693899 ']' 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:47.768 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zUE 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zUE 00:17:48.028 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zUE 00:17:48.287 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:48.287 10:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.C99 00:17:48.287 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.287 10:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.287 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.C99 00:17:48.287 10:37:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.C99 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Mue 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Mue 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Mue 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.n9z 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.n9z 00:17:48.545 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.n9z 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
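Each connect_authenticate pass exercises one (digest, dhgroup, key) combination: the target-side RPC registers the host NQN on the subsystem with that DH-HMAC-CHAP key, and the host-side bdev layer then attaches a controller using the same key, which forces the authentication handshake whose result shows up in the qpair state below. A condensed sketch of the pair of calls, where $rpc and $hostrpc stand in for the rpc_cmd and hostrpc wrappers seen in the trace (scripts/rpc.py against the target socket and against /var/tmp/host.sock respectively):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
# target side: allow this host on the subsystem, bound to a specific keyring key
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0
# host side: attach a controller with the matching key, triggering DH-HMAC-CHAP
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0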
00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:48.803 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.061 00:17:49.061 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:49.061 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.061 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.061 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:49.321 { 00:17:49.321 "cntlid": 1, 00:17:49.321 "qid": 0, 00:17:49.321 "state": "enabled", 00:17:49.321 "listen_address": { 00:17:49.321 "trtype": "TCP", 00:17:49.321 "adrfam": "IPv4", 00:17:49.321 "traddr": "10.0.0.2", 00:17:49.321 "trsvcid": "4420" 00:17:49.321 }, 00:17:49.321 "peer_address": { 00:17:49.321 "trtype": "TCP", 00:17:49.321 "adrfam": "IPv4", 00:17:49.321 "traddr": "10.0.0.1", 00:17:49.321 "trsvcid": "43468" 00:17:49.321 }, 00:17:49.321 "auth": { 00:17:49.321 "state": "completed", 00:17:49.321 "digest": "sha256", 00:17:49.321 "dhgroup": "null" 00:17:49.321 } 00:17:49.321 } 00:17:49.321 ]' 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.321 10:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:49.321 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:49.321 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:49.321 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.321 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.321 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.580 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:17:50.148 10:37:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:50.148 10:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:50.405 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.405 10:37:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:50.663 { 00:17:50.663 "cntlid": 3, 00:17:50.663 "qid": 0, 00:17:50.663 "state": "enabled", 00:17:50.663 "listen_address": { 00:17:50.663 "trtype": "TCP", 00:17:50.663 "adrfam": "IPv4", 00:17:50.663 "traddr": "10.0.0.2", 00:17:50.663 "trsvcid": "4420" 00:17:50.663 }, 00:17:50.663 "peer_address": { 00:17:50.663 "trtype": "TCP", 00:17:50.663 "adrfam": "IPv4", 00:17:50.663 "traddr": "10.0.0.1", 00:17:50.663 "trsvcid": "51942" 00:17:50.663 }, 00:17:50.663 "auth": { 00:17:50.663 "state": "completed", 00:17:50.663 "digest": "sha256", 00:17:50.663 "dhgroup": "null" 00:17:50.663 } 00:17:50.663 } 00:17:50.663 ]' 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.663 10:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:17:51.231 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.231 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:51.231 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.231 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.490 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.491 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.491 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.749 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.749 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:52.007 { 00:17:52.007 "cntlid": 5, 00:17:52.007 "qid": 0, 00:17:52.007 "state": "enabled", 00:17:52.007 "listen_address": { 00:17:52.007 "trtype": "TCP", 00:17:52.007 "adrfam": "IPv4", 00:17:52.007 "traddr": "10.0.0.2", 00:17:52.007 "trsvcid": "4420" 00:17:52.007 }, 00:17:52.007 "peer_address": { 00:17:52.007 "trtype": "TCP", 00:17:52.007 "adrfam": "IPv4", 00:17:52.007 "traddr": "10.0.0.1", 00:17:52.007 "trsvcid": "51960" 00:17:52.007 }, 00:17:52.007 "auth": { 00:17:52.007 "state": "completed", 00:17:52.007 "digest": "sha256", 00:17:52.007 "dhgroup": "null" 00:17:52.007 } 00:17:52.007 } 00:17:52.007 ]' 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
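After each attach, the script asks the target for the subsystem's qpairs and uses jq to assert that the DH-HMAC-CHAP exchange actually completed with the digest and dhgroup configured via bdev_nvme_set_options; the controller is then detached and the same secret is replayed through the kernel initiator with nvme connect. Continuing the shorthand from the sketch above (shown here for key2 with sha256 and the null dhgroup, matching this point in the log):

qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]   # dhgroup *name* "null", not JSON null
$hostrpc bdev_nvme_detach_controller nvme0
# same key, this time through the kernel initiator; the key file already holds the DHHC-1 string
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "${hostnqn#*uuid:}" --dhchap-secret "$(cat "${keys[2]}")"
nvme disconnect -n "$subnqn"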
00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.007 10:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.574 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.835 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.093 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.093 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:53.093 { 00:17:53.093 "cntlid": 7, 00:17:53.093 "qid": 0, 00:17:53.093 "state": "enabled", 00:17:53.093 "listen_address": { 00:17:53.094 "trtype": "TCP", 00:17:53.094 "adrfam": "IPv4", 00:17:53.094 "traddr": "10.0.0.2", 00:17:53.094 "trsvcid": "4420" 00:17:53.094 }, 00:17:53.094 "peer_address": { 00:17:53.094 "trtype": "TCP", 00:17:53.094 "adrfam": "IPv4", 00:17:53.094 "traddr": "10.0.0.1", 00:17:53.094 "trsvcid": "51978" 00:17:53.094 }, 00:17:53.094 "auth": { 00:17:53.094 "state": "completed", 00:17:53.094 "digest": "sha256", 00:17:53.094 "dhgroup": "null" 00:17:53.094 } 00:17:53.094 } 00:17:53.094 ]' 00:17:53.094 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:53.352 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.352 10:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:53.352 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:53.352 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:53.352 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.352 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.353 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.353 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.918 10:37:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.918 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.176 10:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.442 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.442 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:54.442 { 00:17:54.442 "cntlid": 9, 00:17:54.442 "qid": 0, 00:17:54.442 "state": "enabled", 00:17:54.443 "listen_address": { 00:17:54.443 "trtype": "TCP", 00:17:54.443 "adrfam": "IPv4", 00:17:54.443 "traddr": "10.0.0.2", 00:17:54.443 "trsvcid": "4420" 00:17:54.443 }, 00:17:54.443 "peer_address": { 00:17:54.443 "trtype": "TCP", 00:17:54.443 "adrfam": 
"IPv4", 00:17:54.443 "traddr": "10.0.0.1", 00:17:54.443 "trsvcid": "52006" 00:17:54.443 }, 00:17:54.443 "auth": { 00:17:54.443 "state": "completed", 00:17:54.443 "digest": "sha256", 00:17:54.443 "dhgroup": "ffdhe2048" 00:17:54.443 } 00:17:54.443 } 00:17:54.443 ]' 00:17:54.443 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:54.443 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.443 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.709 10:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.277 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.535 
10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:55.535 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:55.793 00:17:55.793 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:55.793 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.793 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:55.793 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:55.794 { 00:17:55.794 "cntlid": 11, 00:17:55.794 "qid": 0, 00:17:55.794 "state": "enabled", 00:17:55.794 "listen_address": { 00:17:55.794 "trtype": "TCP", 00:17:55.794 "adrfam": "IPv4", 00:17:55.794 "traddr": "10.0.0.2", 00:17:55.794 "trsvcid": "4420" 00:17:55.794 }, 00:17:55.794 "peer_address": { 00:17:55.794 "trtype": "TCP", 00:17:55.794 "adrfam": "IPv4", 00:17:55.794 "traddr": "10.0.0.1", 00:17:55.794 "trsvcid": "52038" 00:17:55.794 }, 00:17:55.794 "auth": { 00:17:55.794 "state": "completed", 00:17:55.794 "digest": "sha256", 00:17:55.794 "dhgroup": "ffdhe2048" 00:17:55.794 } 00:17:55.794 } 00:17:55.794 ]' 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:55.794 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.053 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.053 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.053 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.053 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.053 10:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.626 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:56.954 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:56.954 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:57.213 { 00:17:57.213 "cntlid": 13, 00:17:57.213 "qid": 0, 00:17:57.213 "state": "enabled", 00:17:57.213 "listen_address": { 00:17:57.213 "trtype": "TCP", 00:17:57.213 "adrfam": "IPv4", 00:17:57.213 "traddr": "10.0.0.2", 00:17:57.213 "trsvcid": "4420" 00:17:57.213 }, 00:17:57.213 "peer_address": { 00:17:57.213 "trtype": "TCP", 00:17:57.213 "adrfam": "IPv4", 00:17:57.213 "traddr": "10.0.0.1", 00:17:57.213 "trsvcid": "52070" 00:17:57.213 }, 00:17:57.213 "auth": { 00:17:57.213 "state": "completed", 00:17:57.213 "digest": "sha256", 00:17:57.213 "dhgroup": "ffdhe2048" 00:17:57.213 } 00:17:57.213 } 00:17:57.213 ]' 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.213 10:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:57.213 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.213 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:57.213 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.213 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.213 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.470 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.039 10:37:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.039 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.299 10:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.299 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.299 10:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.299 00:17:58.299 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:58.299 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.299 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:58.558 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:58.559 { 00:17:58.559 "cntlid": 15, 00:17:58.559 "qid": 0, 00:17:58.559 "state": "enabled", 00:17:58.559 "listen_address": { 00:17:58.559 "trtype": "TCP", 00:17:58.559 "adrfam": "IPv4", 00:17:58.559 "traddr": "10.0.0.2", 00:17:58.559 "trsvcid": "4420" 00:17:58.559 }, 00:17:58.559 "peer_address": { 00:17:58.559 "trtype": "TCP", 00:17:58.559 "adrfam": "IPv4", 00:17:58.559 "traddr": "10.0.0.1", 00:17:58.559 "trsvcid": "52106" 00:17:58.559 }, 00:17:58.559 "auth": { 00:17:58.559 "state": "completed", 00:17:58.559 "digest": "sha256", 00:17:58.559 "dhgroup": "ffdhe2048" 00:17:58.559 } 00:17:58.559 } 00:17:58.559 ]' 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:58.559 10:37:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.559 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.817 10:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.465 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.466 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:59.466 
10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:59.724 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.724 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:59.984 { 00:17:59.984 "cntlid": 17, 00:17:59.984 "qid": 0, 00:17:59.984 "state": "enabled", 00:17:59.984 "listen_address": { 00:17:59.984 "trtype": "TCP", 00:17:59.984 "adrfam": "IPv4", 00:17:59.984 "traddr": "10.0.0.2", 00:17:59.984 "trsvcid": "4420" 00:17:59.984 }, 00:17:59.984 "peer_address": { 00:17:59.984 "trtype": "TCP", 00:17:59.984 "adrfam": "IPv4", 00:17:59.984 "traddr": "10.0.0.1", 00:17:59.984 "trsvcid": "52130" 00:17:59.984 }, 00:17:59.984 "auth": { 00:17:59.984 "state": "completed", 00:17:59.984 "digest": "sha256", 00:17:59.984 "dhgroup": "ffdhe3072" 00:17:59.984 } 00:17:59.984 } 00:17:59.984 ]' 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.984 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.245 10:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:00.811 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:01.068 00:18:01.068 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:01.068 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:01.068 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.069 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:01.069 { 00:18:01.069 "cntlid": 19, 00:18:01.069 
"qid": 0, 00:18:01.069 "state": "enabled", 00:18:01.069 "listen_address": { 00:18:01.069 "trtype": "TCP", 00:18:01.069 "adrfam": "IPv4", 00:18:01.069 "traddr": "10.0.0.2", 00:18:01.069 "trsvcid": "4420" 00:18:01.069 }, 00:18:01.069 "peer_address": { 00:18:01.069 "trtype": "TCP", 00:18:01.069 "adrfam": "IPv4", 00:18:01.069 "traddr": "10.0.0.1", 00:18:01.069 "trsvcid": "34752" 00:18:01.069 }, 00:18:01.069 "auth": { 00:18:01.069 "state": "completed", 00:18:01.069 "digest": "sha256", 00:18:01.069 "dhgroup": "ffdhe3072" 00:18:01.069 } 00:18:01.069 } 00:18:01.069 ]' 00:18:01.327 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:01.327 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.327 10:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.327 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.898 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.159 10:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.418 00:18:02.418 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:02.418 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.418 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:02.675 { 00:18:02.675 "cntlid": 21, 00:18:02.675 "qid": 0, 00:18:02.675 "state": "enabled", 00:18:02.675 "listen_address": { 00:18:02.675 "trtype": "TCP", 00:18:02.675 "adrfam": "IPv4", 00:18:02.675 "traddr": "10.0.0.2", 00:18:02.675 "trsvcid": "4420" 00:18:02.675 }, 00:18:02.675 "peer_address": { 00:18:02.675 "trtype": "TCP", 00:18:02.675 "adrfam": "IPv4", 00:18:02.675 "traddr": "10.0.0.1", 00:18:02.675 "trsvcid": "34774" 00:18:02.675 }, 00:18:02.675 "auth": { 00:18:02.675 "state": "completed", 00:18:02.675 "digest": "sha256", 00:18:02.675 "dhgroup": "ffdhe3072" 00:18:02.675 } 00:18:02.675 } 00:18:02.675 ]' 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.675 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.932 10:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.501 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.761 00:18:03.761 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:03.761 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:03.761 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:04.020 { 00:18:04.020 "cntlid": 23, 00:18:04.020 "qid": 0, 00:18:04.020 "state": "enabled", 00:18:04.020 "listen_address": { 00:18:04.020 "trtype": "TCP", 00:18:04.020 "adrfam": "IPv4", 00:18:04.020 "traddr": "10.0.0.2", 00:18:04.020 "trsvcid": "4420" 00:18:04.020 }, 00:18:04.020 "peer_address": { 00:18:04.020 "trtype": "TCP", 00:18:04.020 "adrfam": "IPv4", 00:18:04.020 "traddr": "10.0.0.1", 00:18:04.020 "trsvcid": "34794" 00:18:04.020 }, 00:18:04.020 "auth": { 00:18:04.020 "state": "completed", 00:18:04.020 "digest": "sha256", 00:18:04.020 "dhgroup": "ffdhe3072" 00:18:04.020 } 00:18:04.020 } 00:18:04.020 ]' 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.020 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.278 10:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 
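The two loop headers above (for dhgroup / for keyid) drive the rest of this transcript: every digest/dhgroup/key combination gets the same set-up, verify, tear-down cycle. Below is a minimal bash sketch of one such cycle, using only RPC names and flags that appear verbatim in the trace; the rpc.py invocation, the target/host socket split, the NQN variables and the loop bounds are stand-in assumptions for whatever target/auth.sh really defines, not a copy of it.

#!/usr/bin/env bash
# Sketch only -- not part of the captured output. Paths, sockets, NQNs and
# loop bounds are placeholders; RPC names and flags are copied from the trace.
set -e

tgt_rpc()  { ./scripts/rpc.py "$@"; }                        # target-side RPC (assumed default socket, like rpc_cmd)
host_rpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side RPC socket, like the hostrpc wrapper

subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:<host-uuid>"        # placeholder host NQN

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
  for key in key0 key1 key2 key3; do
    # Limit the host to one digest/dhgroup pair for this pass.
    host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Authorize the host on the subsystem with the key under test.
    tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
    # Attach a controller with the same key; DH-HMAC-CHAP runs during connect.
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
    # Verify what the qpair actually negotiated, mirroring the jq checks in the trace.
    qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256     ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    # Tear down before the next combination.
    host_rpc bdev_nvme_detach_controller nvme0
    tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done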
00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:04.845 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:05.105 00:18:05.105 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:05.105 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:05.105 10:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:05.363 { 00:18:05.363 "cntlid": 25, 00:18:05.363 "qid": 0, 00:18:05.363 "state": "enabled", 00:18:05.363 "listen_address": { 00:18:05.363 "trtype": "TCP", 00:18:05.363 "adrfam": "IPv4", 00:18:05.363 "traddr": "10.0.0.2", 00:18:05.363 "trsvcid": "4420" 00:18:05.363 }, 00:18:05.363 "peer_address": { 00:18:05.363 "trtype": "TCP", 00:18:05.363 "adrfam": "IPv4", 00:18:05.363 "traddr": "10.0.0.1", 00:18:05.363 "trsvcid": "34818" 00:18:05.363 }, 00:18:05.363 "auth": { 00:18:05.363 "state": "completed", 00:18:05.363 "digest": "sha256", 
00:18:05.363 "dhgroup": "ffdhe4096" 00:18:05.363 } 00:18:05.363 } 00:18:05.363 ]' 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.363 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.622 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.191 10:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:06.452 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:06.452 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:06.710 { 00:18:06.710 "cntlid": 27, 00:18:06.710 "qid": 0, 00:18:06.710 "state": "enabled", 00:18:06.710 "listen_address": { 00:18:06.710 "trtype": "TCP", 00:18:06.710 "adrfam": "IPv4", 00:18:06.710 "traddr": "10.0.0.2", 00:18:06.710 "trsvcid": "4420" 00:18:06.710 }, 00:18:06.710 "peer_address": { 00:18:06.710 "trtype": "TCP", 00:18:06.710 "adrfam": "IPv4", 00:18:06.710 "traddr": "10.0.0.1", 00:18:06.710 "trsvcid": "34844" 00:18:06.710 }, 00:18:06.710 "auth": { 00:18:06.710 "state": "completed", 00:18:06.710 "digest": "sha256", 00:18:06.710 "dhgroup": "ffdhe4096" 00:18:06.710 } 00:18:06.710 } 00:18:06.710 ]' 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.710 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.969 10:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:07.539 10:37:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.539 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.799 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.799 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:08.057 { 00:18:08.057 "cntlid": 29, 00:18:08.057 "qid": 0, 00:18:08.057 "state": "enabled", 00:18:08.057 "listen_address": { 00:18:08.057 "trtype": "TCP", 00:18:08.057 "adrfam": "IPv4", 00:18:08.057 "traddr": "10.0.0.2", 00:18:08.057 "trsvcid": "4420" 00:18:08.057 }, 00:18:08.057 "peer_address": { 00:18:08.057 "trtype": "TCP", 00:18:08.057 "adrfam": "IPv4", 00:18:08.057 "traddr": "10.0.0.1", 00:18:08.057 "trsvcid": "34876" 00:18:08.057 }, 00:18:08.057 "auth": { 00:18:08.057 "state": "completed", 00:18:08.057 "digest": "sha256", 00:18:08.057 "dhgroup": "ffdhe4096" 00:18:08.057 } 00:18:08.057 } 00:18:08.057 ]' 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.057 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:08.315 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.315 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.315 10:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.315 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.880 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:09.139 10:37:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.139 10:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.399 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:09.399 { 00:18:09.399 "cntlid": 31, 00:18:09.399 "qid": 0, 00:18:09.399 "state": "enabled", 00:18:09.399 "listen_address": { 00:18:09.399 "trtype": "TCP", 00:18:09.399 "adrfam": "IPv4", 00:18:09.399 "traddr": "10.0.0.2", 00:18:09.399 "trsvcid": "4420" 00:18:09.399 }, 00:18:09.399 "peer_address": { 00:18:09.399 "trtype": "TCP", 00:18:09.399 "adrfam": "IPv4", 00:18:09.399 "traddr": "10.0.0.1", 00:18:09.399 "trsvcid": "34914" 00:18:09.399 }, 00:18:09.399 "auth": { 00:18:09.399 "state": "completed", 00:18:09.399 "digest": "sha256", 00:18:09.399 "dhgroup": "ffdhe4096" 00:18:09.399 } 00:18:09.399 } 00:18:09.399 ]' 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.399 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:09.658 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.658 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:09.658 10:37:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.658 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.658 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.658 10:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.223 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:10.482 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:10.741 00:18:10.741 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:10.741 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:10.741 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:11.002 { 00:18:11.002 "cntlid": 33, 00:18:11.002 "qid": 0, 00:18:11.002 "state": "enabled", 00:18:11.002 "listen_address": { 00:18:11.002 "trtype": "TCP", 00:18:11.002 "adrfam": "IPv4", 00:18:11.002 "traddr": "10.0.0.2", 00:18:11.002 "trsvcid": "4420" 00:18:11.002 }, 00:18:11.002 "peer_address": { 00:18:11.002 "trtype": "TCP", 00:18:11.002 "adrfam": "IPv4", 00:18:11.002 "traddr": "10.0.0.1", 00:18:11.002 "trsvcid": "49072" 00:18:11.002 }, 00:18:11.002 "auth": { 00:18:11.002 "state": "completed", 00:18:11.002 "digest": "sha256", 00:18:11.002 "dhgroup": "ffdhe6144" 00:18:11.002 } 00:18:11.002 } 00:18:11.002 ]' 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.002 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.261 10:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.827 
10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:11.827 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:12.084 00:18:12.342 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.342 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.342 10:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:12.342 { 00:18:12.342 "cntlid": 35, 00:18:12.342 "qid": 0, 00:18:12.342 "state": "enabled", 00:18:12.342 "listen_address": { 00:18:12.342 "trtype": "TCP", 00:18:12.342 "adrfam": "IPv4", 00:18:12.342 "traddr": "10.0.0.2", 00:18:12.342 "trsvcid": "4420" 
00:18:12.342 }, 00:18:12.342 "peer_address": { 00:18:12.342 "trtype": "TCP", 00:18:12.342 "adrfam": "IPv4", 00:18:12.342 "traddr": "10.0.0.1", 00:18:12.342 "trsvcid": "49100" 00:18:12.342 }, 00:18:12.342 "auth": { 00:18:12.342 "state": "completed", 00:18:12.342 "digest": "sha256", 00:18:12.342 "dhgroup": "ffdhe6144" 00:18:12.342 } 00:18:12.342 } 00:18:12.342 ]' 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.342 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:12.601 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.601 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.601 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.601 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.169 10:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:13.427 10:37:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:13.427 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:13.687 00:18:13.687 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:13.687 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:13.687 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:13.946 { 00:18:13.946 "cntlid": 37, 00:18:13.946 "qid": 0, 00:18:13.946 "state": "enabled", 00:18:13.946 "listen_address": { 00:18:13.946 "trtype": "TCP", 00:18:13.946 "adrfam": "IPv4", 00:18:13.946 "traddr": "10.0.0.2", 00:18:13.946 "trsvcid": "4420" 00:18:13.946 }, 00:18:13.946 "peer_address": { 00:18:13.946 "trtype": "TCP", 00:18:13.946 "adrfam": "IPv4", 00:18:13.946 "traddr": "10.0.0.1", 00:18:13.946 "trsvcid": "49116" 00:18:13.946 }, 00:18:13.946 "auth": { 00:18:13.946 "state": "completed", 00:18:13.946 "digest": "sha256", 00:18:13.946 "dhgroup": "ffdhe6144" 00:18:13.946 } 00:18:13.946 } 00:18:13.946 ]' 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.946 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.205 10:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.775 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.034 00:18:15.034 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:15.034 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.034 10:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.292 
10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:15.292 { 00:18:15.292 "cntlid": 39, 00:18:15.292 "qid": 0, 00:18:15.292 "state": "enabled", 00:18:15.292 "listen_address": { 00:18:15.292 "trtype": "TCP", 00:18:15.292 "adrfam": "IPv4", 00:18:15.292 "traddr": "10.0.0.2", 00:18:15.292 "trsvcid": "4420" 00:18:15.292 }, 00:18:15.292 "peer_address": { 00:18:15.292 "trtype": "TCP", 00:18:15.292 "adrfam": "IPv4", 00:18:15.292 "traddr": "10.0.0.1", 00:18:15.292 "trsvcid": "49144" 00:18:15.292 }, 00:18:15.292 "auth": { 00:18:15.292 "state": "completed", 00:18:15.292 "digest": "sha256", 00:18:15.292 "dhgroup": "ffdhe6144" 00:18:15.292 } 00:18:15.292 } 00:18:15.292 ]' 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.292 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.549 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.118 10:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:16.378 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:16.637 00:18:16.897 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:16.897 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:16.897 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.897 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:16.898 { 00:18:16.898 "cntlid": 41, 00:18:16.898 "qid": 0, 00:18:16.898 "state": "enabled", 00:18:16.898 "listen_address": { 00:18:16.898 "trtype": "TCP", 00:18:16.898 "adrfam": "IPv4", 00:18:16.898 "traddr": "10.0.0.2", 00:18:16.898 "trsvcid": "4420" 00:18:16.898 }, 00:18:16.898 "peer_address": { 00:18:16.898 "trtype": "TCP", 00:18:16.898 "adrfam": "IPv4", 00:18:16.898 "traddr": "10.0.0.1", 00:18:16.898 "trsvcid": "49178" 00:18:16.898 }, 00:18:16.898 "auth": { 00:18:16.898 "state": "completed", 00:18:16.898 "digest": "sha256", 00:18:16.898 "dhgroup": "ffdhe8192" 00:18:16.898 } 00:18:16.898 } 00:18:16.898 ]' 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.898 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:17.158 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.158 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.158 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.158 10:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.727 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:17.987 10:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:18.247 00:18:18.247 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:18.247 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:18.247 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:18.507 { 00:18:18.507 "cntlid": 43, 00:18:18.507 "qid": 0, 00:18:18.507 "state": "enabled", 00:18:18.507 "listen_address": { 00:18:18.507 "trtype": "TCP", 00:18:18.507 "adrfam": "IPv4", 00:18:18.507 "traddr": "10.0.0.2", 00:18:18.507 "trsvcid": "4420" 00:18:18.507 }, 00:18:18.507 "peer_address": { 00:18:18.507 "trtype": "TCP", 00:18:18.507 "adrfam": "IPv4", 00:18:18.507 "traddr": "10.0.0.1", 00:18:18.507 "trsvcid": "49208" 00:18:18.507 }, 00:18:18.507 "auth": { 00:18:18.507 "state": "completed", 00:18:18.507 "digest": "sha256", 00:18:18.507 "dhgroup": "ffdhe8192" 00:18:18.507 } 00:18:18.507 } 00:18:18.507 ]' 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.507 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.768 10:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.338 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:19.597 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:19.855 00:18:19.855 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:19.855 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:19.855 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:20.115 { 
00:18:20.115 "cntlid": 45, 00:18:20.115 "qid": 0, 00:18:20.115 "state": "enabled", 00:18:20.115 "listen_address": { 00:18:20.115 "trtype": "TCP", 00:18:20.115 "adrfam": "IPv4", 00:18:20.115 "traddr": "10.0.0.2", 00:18:20.115 "trsvcid": "4420" 00:18:20.115 }, 00:18:20.115 "peer_address": { 00:18:20.115 "trtype": "TCP", 00:18:20.115 "adrfam": "IPv4", 00:18:20.115 "traddr": "10.0.0.1", 00:18:20.115 "trsvcid": "49242" 00:18:20.115 }, 00:18:20.115 "auth": { 00:18:20.115 "state": "completed", 00:18:20.115 "digest": "sha256", 00:18:20.115 "dhgroup": "ffdhe8192" 00:18:20.115 } 00:18:20.115 } 00:18:20.115 ]' 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.115 10:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.375 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.946 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.205 10:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.462 00:18:21.462 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:21.462 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:21.462 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:21.721 { 00:18:21.721 "cntlid": 47, 00:18:21.721 "qid": 0, 00:18:21.721 "state": "enabled", 00:18:21.721 "listen_address": { 00:18:21.721 "trtype": "TCP", 00:18:21.721 "adrfam": "IPv4", 00:18:21.721 "traddr": "10.0.0.2", 00:18:21.721 "trsvcid": "4420" 00:18:21.721 }, 00:18:21.721 "peer_address": { 00:18:21.721 "trtype": "TCP", 00:18:21.721 "adrfam": "IPv4", 00:18:21.721 "traddr": "10.0.0.1", 00:18:21.721 "trsvcid": "41554" 00:18:21.721 }, 00:18:21.721 "auth": { 00:18:21.721 "state": "completed", 00:18:21.721 "digest": "sha256", 00:18:21.721 "dhgroup": "ffdhe8192" 00:18:21.721 } 00:18:21.721 } 00:18:21.721 ]' 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.721 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.721 10:37:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.979 10:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.549 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:22.808 
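The `for digest` / `for dhgroup` / `for keyid` entries a few lines above mark the loop nest in target/auth.sh that drives every round in this trace. As a rough sketch only (the array contents are assumptions; this excerpt shows just sha256/sha384 with the ffdhe6144, ffdhe8192, null and ffdhe2048 groups and keys key0..key3), the driver reduces to:

```bash
# Sketch of the driver loop inferred from the xtrace above; not the verbatim script.
# digests/dhgroups contents are assumed beyond what this excerpt shows.
digests=(sha256 sha384)
dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # keys[] holds the DHHC-1 secrets (key0..key3)
            # hostrpc = rpc.py against the host app's socket, as shown at auth.sh@31
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```

The trace below continues exactly this pattern: one `bdev_nvme_set_options` call per key, followed by a full `connect_authenticate` round for that digest/dhgroup/key combination.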
00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.808 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:23.066 { 00:18:23.066 "cntlid": 49, 00:18:23.066 "qid": 0, 00:18:23.066 "state": "enabled", 00:18:23.066 "listen_address": { 00:18:23.066 "trtype": "TCP", 00:18:23.066 "adrfam": "IPv4", 00:18:23.066 "traddr": "10.0.0.2", 00:18:23.066 "trsvcid": "4420" 00:18:23.066 }, 00:18:23.066 "peer_address": { 00:18:23.066 "trtype": "TCP", 00:18:23.066 "adrfam": "IPv4", 00:18:23.066 "traddr": "10.0.0.1", 00:18:23.066 "trsvcid": "41586" 00:18:23.066 }, 00:18:23.066 "auth": { 00:18:23.066 "state": "completed", 00:18:23.066 "digest": "sha384", 00:18:23.066 "dhgroup": "null" 00:18:23.066 } 00:18:23.066 } 00:18:23.066 ]' 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.066 10:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.323 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:23.888 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:24.153 00:18:24.153 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:24.153 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:24.153 10:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:24.507 { 00:18:24.507 "cntlid": 51, 00:18:24.507 "qid": 0, 00:18:24.507 "state": "enabled", 00:18:24.507 "listen_address": { 00:18:24.507 "trtype": "TCP", 00:18:24.507 "adrfam": "IPv4", 00:18:24.507 "traddr": "10.0.0.2", 00:18:24.507 "trsvcid": "4420" 00:18:24.507 }, 00:18:24.507 "peer_address": { 00:18:24.507 "trtype": "TCP", 00:18:24.507 "adrfam": "IPv4", 00:18:24.507 "traddr": "10.0.0.1", 00:18:24.507 "trsvcid": "41620" 00:18:24.507 
}, 00:18:24.507 "auth": { 00:18:24.507 "state": "completed", 00:18:24.507 "digest": "sha384", 00:18:24.507 "dhgroup": "null" 00:18:24.507 } 00:18:24.507 } 00:18:24.507 ]' 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.507 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.767 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.333 10:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.333 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:18:25.333 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:25.333 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.333 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.333 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.334 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.592 00:18:25.592 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:25.592 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:25.592 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.592 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.592 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:25.593 { 00:18:25.593 "cntlid": 53, 00:18:25.593 "qid": 0, 00:18:25.593 "state": "enabled", 00:18:25.593 "listen_address": { 00:18:25.593 "trtype": "TCP", 00:18:25.593 "adrfam": "IPv4", 00:18:25.593 "traddr": "10.0.0.2", 00:18:25.593 "trsvcid": "4420" 00:18:25.593 }, 00:18:25.593 "peer_address": { 00:18:25.593 "trtype": "TCP", 00:18:25.593 "adrfam": "IPv4", 00:18:25.593 "traddr": "10.0.0.1", 00:18:25.593 "trsvcid": "41646" 00:18:25.593 }, 00:18:25.593 "auth": { 00:18:25.593 "state": "completed", 00:18:25.593 "digest": "sha384", 00:18:25.593 "dhgroup": "null" 00:18:25.593 } 00:18:25.593 } 00:18:25.593 ]' 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.593 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:25.850 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:25.850 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:25.850 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.850 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.850 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.851 10:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:26.418 
10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.418 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.676 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.934 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:26.934 { 00:18:26.934 "cntlid": 55, 00:18:26.934 "qid": 0, 00:18:26.934 "state": "enabled", 00:18:26.934 "listen_address": { 00:18:26.934 "trtype": "TCP", 00:18:26.934 "adrfam": "IPv4", 00:18:26.934 "traddr": "10.0.0.2", 00:18:26.934 "trsvcid": "4420" 00:18:26.934 }, 00:18:26.934 "peer_address": { 00:18:26.934 "trtype": "TCP", 00:18:26.934 "adrfam": "IPv4", 00:18:26.934 "traddr": "10.0.0.1", 00:18:26.934 "trsvcid": "41676" 00:18:26.934 }, 00:18:26.934 "auth": { 00:18:26.934 "state": "completed", 00:18:26.934 "digest": "sha384", 00:18:26.934 "dhgroup": "null" 00:18:26.934 } 00:18:26.934 } 00:18:26.934 ]' 00:18:26.934 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.191 10:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.191 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.758 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:18:28.018 10:37:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.018 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.278 00:18:28.278 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.278 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.278 10:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:28.278 { 00:18:28.278 "cntlid": 57, 00:18:28.278 "qid": 0, 00:18:28.278 "state": "enabled", 00:18:28.278 "listen_address": { 00:18:28.278 "trtype": "TCP", 00:18:28.278 "adrfam": "IPv4", 00:18:28.278 "traddr": "10.0.0.2", 00:18:28.278 "trsvcid": "4420" 00:18:28.278 }, 00:18:28.278 "peer_address": { 00:18:28.278 "trtype": "TCP", 00:18:28.278 "adrfam": "IPv4", 00:18:28.278 "traddr": "10.0.0.1", 00:18:28.278 "trsvcid": "41698" 00:18:28.278 }, 00:18:28.278 "auth": { 00:18:28.278 "state": "completed", 00:18:28.278 "digest": "sha384", 00:18:28.278 "dhgroup": "ffdhe2048" 00:18:28.278 } 00:18:28.278 } 00:18:28.278 ]' 00:18:28.278 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.537 10:37:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.537 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.102 10:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:29.361 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:29.621 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:29.621 { 00:18:29.621 "cntlid": 59, 00:18:29.621 "qid": 0, 00:18:29.621 "state": "enabled", 00:18:29.621 "listen_address": { 00:18:29.621 "trtype": "TCP", 00:18:29.621 "adrfam": "IPv4", 00:18:29.621 "traddr": "10.0.0.2", 00:18:29.621 "trsvcid": "4420" 00:18:29.621 }, 00:18:29.621 "peer_address": { 00:18:29.621 "trtype": "TCP", 00:18:29.621 "adrfam": "IPv4", 00:18:29.621 "traddr": "10.0.0.1", 00:18:29.621 "trsvcid": "41742" 00:18:29.621 }, 00:18:29.621 "auth": { 00:18:29.621 "state": "completed", 00:18:29.621 "digest": "sha384", 00:18:29.621 "dhgroup": "ffdhe2048" 00:18:29.621 } 00:18:29.621 } 00:18:29.621 ]' 00:18:29.621 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.881 10:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.446 10:37:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.446 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.703 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.962 00:18:30.962 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:30.962 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:30.962 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:30.963 { 00:18:30.963 "cntlid": 61, 00:18:30.963 "qid": 0, 00:18:30.963 "state": "enabled", 00:18:30.963 "listen_address": { 00:18:30.963 "trtype": "TCP", 00:18:30.963 "adrfam": "IPv4", 00:18:30.963 "traddr": "10.0.0.2", 00:18:30.963 "trsvcid": "4420" 00:18:30.963 }, 
00:18:30.963 "peer_address": { 00:18:30.963 "trtype": "TCP", 00:18:30.963 "adrfam": "IPv4", 00:18:30.963 "traddr": "10.0.0.1", 00:18:30.963 "trsvcid": "45824" 00:18:30.963 }, 00:18:30.963 "auth": { 00:18:30.963 "state": "completed", 00:18:30.963 "digest": "sha384", 00:18:30.963 "dhgroup": "ffdhe2048" 00:18:30.963 } 00:18:30.963 } 00:18:30.963 ]' 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.963 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:31.223 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.223 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:31.223 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.223 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.223 10:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.223 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.793 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:32.050 10:37:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.050 10:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.308 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:32.308 { 00:18:32.308 "cntlid": 63, 00:18:32.308 "qid": 0, 00:18:32.308 "state": "enabled", 00:18:32.308 "listen_address": { 00:18:32.308 "trtype": "TCP", 00:18:32.308 "adrfam": "IPv4", 00:18:32.308 "traddr": "10.0.0.2", 00:18:32.308 "trsvcid": "4420" 00:18:32.308 }, 00:18:32.308 "peer_address": { 00:18:32.308 "trtype": "TCP", 00:18:32.308 "adrfam": "IPv4", 00:18:32.308 "traddr": "10.0.0.1", 00:18:32.308 "trsvcid": "45854" 00:18:32.308 }, 00:18:32.308 "auth": { 00:18:32.308 "state": "completed", 00:18:32.308 "digest": "sha384", 00:18:32.308 "dhgroup": "ffdhe2048" 00:18:32.308 } 00:18:32.308 } 00:18:32.308 ]' 00:18:32.308 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:32.565 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.566 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.135 10:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:33.395 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:33.654 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.654 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.655 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:33.912 { 00:18:33.912 "cntlid": 65, 00:18:33.912 "qid": 0, 00:18:33.912 "state": "enabled", 00:18:33.912 "listen_address": { 00:18:33.912 "trtype": "TCP", 00:18:33.912 "adrfam": "IPv4", 00:18:33.912 "traddr": "10.0.0.2", 00:18:33.912 "trsvcid": "4420" 00:18:33.912 }, 00:18:33.912 "peer_address": { 00:18:33.912 "trtype": "TCP", 00:18:33.912 "adrfam": "IPv4", 00:18:33.912 "traddr": "10.0.0.1", 00:18:33.912 "trsvcid": "45882" 00:18:33.912 }, 00:18:33.912 "auth": { 00:18:33.912 "state": "completed", 00:18:33.912 "digest": "sha384", 00:18:33.912 "dhgroup": "ffdhe3072" 00:18:33.912 } 00:18:33.912 } 00:18:33.912 ]' 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.912 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.913 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.913 10:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:34.479 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.480 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:34.480 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.480 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:34.740 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:35.000 00:18:35.000 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:35.000 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.000 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:35.259 { 00:18:35.259 "cntlid": 67, 00:18:35.259 "qid": 0, 00:18:35.259 "state": "enabled", 00:18:35.259 "listen_address": { 00:18:35.259 "trtype": "TCP", 00:18:35.259 "adrfam": "IPv4", 00:18:35.259 "traddr": "10.0.0.2", 00:18:35.259 "trsvcid": "4420" 00:18:35.259 }, 00:18:35.259 "peer_address": { 00:18:35.259 "trtype": "TCP", 00:18:35.259 "adrfam": "IPv4", 00:18:35.259 "traddr": "10.0.0.1", 00:18:35.259 "trsvcid": "45914" 00:18:35.259 }, 00:18:35.259 "auth": { 00:18:35.259 "state": "completed", 00:18:35.259 "digest": "sha384", 00:18:35.259 "dhgroup": "ffdhe3072" 00:18:35.259 } 00:18:35.259 } 00:18:35.259 ]' 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:35.259 10:37:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.259 10:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:35.259 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.259 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.259 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.518 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.083 10:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.343 00:18:36.343 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:36.343 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:36.343 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:36.603 { 00:18:36.603 "cntlid": 69, 00:18:36.603 "qid": 0, 00:18:36.603 "state": "enabled", 00:18:36.603 "listen_address": { 00:18:36.603 "trtype": "TCP", 00:18:36.603 "adrfam": "IPv4", 00:18:36.603 "traddr": "10.0.0.2", 00:18:36.603 "trsvcid": "4420" 00:18:36.603 }, 00:18:36.603 "peer_address": { 00:18:36.603 "trtype": "TCP", 00:18:36.603 "adrfam": "IPv4", 00:18:36.603 "traddr": "10.0.0.1", 00:18:36.603 "trsvcid": "45938" 00:18:36.603 }, 00:18:36.603 "auth": { 00:18:36.603 "state": "completed", 00:18:36.603 "digest": "sha384", 00:18:36.603 "dhgroup": "ffdhe3072" 00:18:36.603 } 00:18:36.603 } 00:18:36.603 ]' 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.603 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.862 10:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.446 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.703 00:18:37.703 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.703 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.703 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 
00:18:37.962 { 00:18:37.962 "cntlid": 71, 00:18:37.962 "qid": 0, 00:18:37.962 "state": "enabled", 00:18:37.962 "listen_address": { 00:18:37.962 "trtype": "TCP", 00:18:37.962 "adrfam": "IPv4", 00:18:37.962 "traddr": "10.0.0.2", 00:18:37.962 "trsvcid": "4420" 00:18:37.962 }, 00:18:37.962 "peer_address": { 00:18:37.962 "trtype": "TCP", 00:18:37.962 "adrfam": "IPv4", 00:18:37.962 "traddr": "10.0.0.1", 00:18:37.962 "trsvcid": "45962" 00:18:37.962 }, 00:18:37.962 "auth": { 00:18:37.962 "state": "completed", 00:18:37.962 "digest": "sha384", 00:18:37.962 "dhgroup": "ffdhe3072" 00:18:37.962 } 00:18:37.962 } 00:18:37.962 ]' 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.962 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.221 10:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:38.788 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.788 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:38.788 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.789 10:37:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.789 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:39.047 00:18:39.047 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:39.047 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:39.047 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.306 { 00:18:39.306 "cntlid": 73, 00:18:39.306 "qid": 0, 00:18:39.306 "state": "enabled", 00:18:39.306 "listen_address": { 00:18:39.306 "trtype": "TCP", 00:18:39.306 "adrfam": "IPv4", 00:18:39.306 "traddr": "10.0.0.2", 00:18:39.306 "trsvcid": "4420" 00:18:39.306 }, 00:18:39.306 "peer_address": { 00:18:39.306 "trtype": "TCP", 00:18:39.306 "adrfam": "IPv4", 00:18:39.306 "traddr": "10.0.0.1", 00:18:39.306 "trsvcid": "45986" 00:18:39.306 }, 00:18:39.306 "auth": { 00:18:39.306 "state": "completed", 00:18:39.306 "digest": "sha384", 00:18:39.306 "dhgroup": "ffdhe4096" 00:18:39.306 } 00:18:39.306 } 00:18:39.306 ]' 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.306 10:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.306 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.306 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.306 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.306 
10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.306 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.563 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:40.129 10:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:40.388 00:18:40.388 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:18:40.388 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.388 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.648 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.648 { 00:18:40.648 "cntlid": 75, 00:18:40.648 "qid": 0, 00:18:40.648 "state": "enabled", 00:18:40.649 "listen_address": { 00:18:40.649 "trtype": "TCP", 00:18:40.649 "adrfam": "IPv4", 00:18:40.649 "traddr": "10.0.0.2", 00:18:40.649 "trsvcid": "4420" 00:18:40.649 }, 00:18:40.649 "peer_address": { 00:18:40.649 "trtype": "TCP", 00:18:40.649 "adrfam": "IPv4", 00:18:40.649 "traddr": "10.0.0.1", 00:18:40.649 "trsvcid": "54324" 00:18:40.649 }, 00:18:40.649 "auth": { 00:18:40.649 "state": "completed", 00:18:40.649 "digest": "sha384", 00:18:40.649 "dhgroup": "ffdhe4096" 00:18:40.649 } 00:18:40.649 } 00:18:40.649 ]' 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.649 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.907 10:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:41.474 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:41.733 00:18:41.733 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.733 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:41.733 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.993 { 00:18:41.993 "cntlid": 77, 00:18:41.993 "qid": 0, 00:18:41.993 "state": "enabled", 00:18:41.993 "listen_address": { 00:18:41.993 "trtype": "TCP", 00:18:41.993 "adrfam": "IPv4", 00:18:41.993 "traddr": "10.0.0.2", 00:18:41.993 "trsvcid": "4420" 00:18:41.993 }, 00:18:41.993 "peer_address": { 00:18:41.993 "trtype": "TCP", 00:18:41.993 "adrfam": "IPv4", 00:18:41.993 "traddr": "10.0.0.1", 00:18:41.993 "trsvcid": "54356" 00:18:41.993 }, 00:18:41.993 "auth": { 00:18:41.993 "state": "completed", 00:18:41.993 
"digest": "sha384", 00:18:41.993 "dhgroup": "ffdhe4096" 00:18:41.993 } 00:18:41.993 } 00:18:41.993 ]' 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.993 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.253 10:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.819 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.076 00:18:43.076 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.076 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.076 10:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.337 { 00:18:43.337 "cntlid": 79, 00:18:43.337 "qid": 0, 00:18:43.337 "state": "enabled", 00:18:43.337 "listen_address": { 00:18:43.337 "trtype": "TCP", 00:18:43.337 "adrfam": "IPv4", 00:18:43.337 "traddr": "10.0.0.2", 00:18:43.337 "trsvcid": "4420" 00:18:43.337 }, 00:18:43.337 "peer_address": { 00:18:43.337 "trtype": "TCP", 00:18:43.337 "adrfam": "IPv4", 00:18:43.337 "traddr": "10.0.0.1", 00:18:43.337 "trsvcid": "54388" 00:18:43.337 }, 00:18:43.337 "auth": { 00:18:43.337 "state": "completed", 00:18:43.337 "digest": "sha384", 00:18:43.337 "dhgroup": "ffdhe4096" 00:18:43.337 } 00:18:43.337 } 00:18:43.337 ]' 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.337 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.598 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret 
DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.168 10:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.168 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.734 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.734 { 00:18:44.734 "cntlid": 81, 00:18:44.734 "qid": 0, 00:18:44.734 "state": "enabled", 00:18:44.734 "listen_address": { 00:18:44.734 "trtype": "TCP", 00:18:44.734 "adrfam": "IPv4", 00:18:44.734 "traddr": "10.0.0.2", 00:18:44.734 "trsvcid": "4420" 00:18:44.734 }, 00:18:44.734 "peer_address": { 00:18:44.734 "trtype": "TCP", 00:18:44.734 "adrfam": "IPv4", 00:18:44.734 "traddr": "10.0.0.1", 00:18:44.734 "trsvcid": "54418" 00:18:44.734 }, 00:18:44.734 "auth": { 00:18:44.734 "state": "completed", 00:18:44.734 "digest": "sha384", 00:18:44.734 "dhgroup": "ffdhe6144" 00:18:44.734 } 00:18:44.734 } 00:18:44.734 ]' 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.734 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.992 10:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.561 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
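
The connect_authenticate rounds traced above and below all follow the same two-sided setup: the target grants the host NQN access to the subsystem with a specific DH-HMAC-CHAP key, and the host-side bdev_nvme layer then attaches a controller over TCP using the same key. A minimal sketch of that pair of RPCs, pulled out of the xtrace for the ffdhe6144/key1 round that follows (the key1 object itself is registered earlier in the script, outside this excerpt):

    # Target side: allow this host NQN on the subsystem, bound to DH-HMAC-CHAP key1.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1

    # Host side: attach a controller over TCP, authenticating in-band with the same key.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
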
00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:45.821 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.078 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.078 10:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.334 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:46.334 { 00:18:46.334 "cntlid": 83, 00:18:46.334 "qid": 0, 00:18:46.334 "state": "enabled", 00:18:46.334 "listen_address": { 00:18:46.334 "trtype": "TCP", 00:18:46.334 "adrfam": "IPv4", 00:18:46.334 "traddr": "10.0.0.2", 00:18:46.334 "trsvcid": "4420" 00:18:46.334 }, 00:18:46.334 "peer_address": { 00:18:46.334 "trtype": "TCP", 00:18:46.334 "adrfam": "IPv4", 00:18:46.334 "traddr": "10.0.0.1", 00:18:46.334 "trsvcid": "54448" 00:18:46.334 }, 00:18:46.334 "auth": { 00:18:46.334 "state": "completed", 00:18:46.334 "digest": "sha384", 00:18:46.334 "dhgroup": "ffdhe6144" 00:18:46.334 } 00:18:46.334 } 00:18:46.334 ]' 00:18:46.334 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.334 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.334 10:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 
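
After each successful attach, the script checks what the target actually negotiated: nvmf_subsystem_get_qpairs returns the subsystem's active queue pairs, and the auth block of the first qpair is compared against the expected digest, DH group, and completion state. A condensed sketch of that check, assuming rpc_cmd points at the target RPC socket as in the surrounding trace and reusing the jq filters visible in the log:

    # Query the target for the subsystem's qpairs and verify the negotiated auth parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
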
00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.334 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:46.899 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.899 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:46.899 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.899 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.158 10:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.419 00:18:47.419 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.419 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:47.419 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.680 { 00:18:47.680 "cntlid": 85, 00:18:47.680 "qid": 0, 00:18:47.680 "state": "enabled", 00:18:47.680 "listen_address": { 00:18:47.680 "trtype": "TCP", 00:18:47.680 "adrfam": "IPv4", 00:18:47.680 "traddr": "10.0.0.2", 00:18:47.680 "trsvcid": "4420" 00:18:47.680 }, 00:18:47.680 "peer_address": { 00:18:47.680 "trtype": "TCP", 00:18:47.680 "adrfam": "IPv4", 00:18:47.680 "traddr": "10.0.0.1", 00:18:47.680 "trsvcid": "54466" 00:18:47.680 }, 00:18:47.680 "auth": { 00:18:47.680 "state": "completed", 00:18:47.680 "digest": "sha384", 00:18:47.680 "dhgroup": "ffdhe6144" 00:18:47.680 } 00:18:47.680 } 00:18:47.680 ]' 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.680 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.938 10:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:48.503 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:48.504 10:38:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.504 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.073 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.073 { 00:18:49.073 "cntlid": 87, 00:18:49.073 "qid": 0, 00:18:49.073 "state": "enabled", 00:18:49.073 "listen_address": { 00:18:49.073 "trtype": "TCP", 00:18:49.073 
"adrfam": "IPv4", 00:18:49.073 "traddr": "10.0.0.2", 00:18:49.073 "trsvcid": "4420" 00:18:49.073 }, 00:18:49.073 "peer_address": { 00:18:49.073 "trtype": "TCP", 00:18:49.073 "adrfam": "IPv4", 00:18:49.073 "traddr": "10.0.0.1", 00:18:49.073 "trsvcid": "54500" 00:18:49.073 }, 00:18:49.073 "auth": { 00:18:49.073 "state": "completed", 00:18:49.073 "digest": "sha384", 00:18:49.073 "dhgroup": "ffdhe6144" 00:18:49.073 } 00:18:49.073 } 00:18:49.073 ]' 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.073 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.333 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.333 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.333 10:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.333 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.899 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.157 10:38:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.157 10:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.416 00:18:50.416 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:50.677 { 00:18:50.677 "cntlid": 89, 00:18:50.677 "qid": 0, 00:18:50.677 "state": "enabled", 00:18:50.677 "listen_address": { 00:18:50.677 "trtype": "TCP", 00:18:50.677 "adrfam": "IPv4", 00:18:50.677 "traddr": "10.0.0.2", 00:18:50.677 "trsvcid": "4420" 00:18:50.677 }, 00:18:50.677 "peer_address": { 00:18:50.677 "trtype": "TCP", 00:18:50.677 "adrfam": "IPv4", 00:18:50.677 "traddr": "10.0.0.1", 00:18:50.677 "trsvcid": "49808" 00:18:50.677 }, 00:18:50.677 "auth": { 00:18:50.677 "state": "completed", 00:18:50.677 "digest": "sha384", 00:18:50.677 "dhgroup": "ffdhe8192" 00:18:50.677 } 00:18:50.677 } 00:18:50.677 ]' 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.677 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:50.937 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.937 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.937 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.937 10:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.502 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:51.760 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:52.019 00:18:52.019 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:52.019 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:52.019 10:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.279 { 00:18:52.279 "cntlid": 91, 00:18:52.279 "qid": 0, 00:18:52.279 "state": "enabled", 00:18:52.279 "listen_address": { 00:18:52.279 "trtype": "TCP", 00:18:52.279 "adrfam": "IPv4", 00:18:52.279 "traddr": "10.0.0.2", 00:18:52.279 "trsvcid": "4420" 00:18:52.279 }, 00:18:52.279 "peer_address": { 00:18:52.279 "trtype": "TCP", 00:18:52.279 "adrfam": "IPv4", 00:18:52.279 "traddr": "10.0.0.1", 00:18:52.279 "trsvcid": "49836" 00:18:52.279 }, 00:18:52.279 "auth": { 00:18:52.279 "state": "completed", 00:18:52.279 "digest": "sha384", 00:18:52.279 "dhgroup": "ffdhe8192" 00:18:52.279 } 00:18:52.279 } 00:18:52.279 ]' 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.279 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.539 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
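
The pattern repeating through this part of the log comes from three nested loops in auth.sh (visible as the @84, @85, and @86 xtrace lines): every digest is exercised against every DH group and every configured key, with bdev_nvme_set_options narrowing the host to one digest/dhgroup combination before each connect_authenticate call. A rough paraphrase of that structure, not the verbatim script, with array contents taken only from what this excerpt shows:

    # Paraphrased from the auth.sh@84-89 xtrace lines in this log (not the verbatim script).
    for digest in "${digests[@]}"; do            # e.g. sha384, sha512 in this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe4096, ffdhe6144, ffdhe8192, null
            for keyid in "${!keys[@]}"; do       # key0..key3
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
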
00:18:53.107 10:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:53.365 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:53.622 00:18:53.880 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.880 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.880 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.880 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.880 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.881 { 00:18:53.881 "cntlid": 93, 00:18:53.881 "qid": 0, 00:18:53.881 "state": "enabled", 00:18:53.881 "listen_address": { 00:18:53.881 "trtype": "TCP", 00:18:53.881 "adrfam": "IPv4", 00:18:53.881 "traddr": "10.0.0.2", 00:18:53.881 "trsvcid": "4420" 00:18:53.881 }, 00:18:53.881 "peer_address": { 00:18:53.881 "trtype": "TCP", 00:18:53.881 "adrfam": "IPv4", 00:18:53.881 "traddr": "10.0.0.1", 00:18:53.881 "trsvcid": "49856" 00:18:53.881 }, 00:18:53.881 "auth": { 00:18:53.881 "state": "completed", 00:18:53.881 "digest": "sha384", 00:18:53.881 "dhgroup": "ffdhe8192" 00:18:53.881 } 00:18:53.881 } 00:18:53.881 ]' 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.881 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.140 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.140 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.140 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.140 10:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.752 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.009 10:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.267 00:18:55.267 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.267 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.267 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.554 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.554 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.554 10:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.554 10:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.554 10:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.555 { 00:18:55.555 "cntlid": 95, 00:18:55.555 "qid": 0, 00:18:55.555 "state": "enabled", 00:18:55.555 "listen_address": { 00:18:55.555 "trtype": "TCP", 00:18:55.555 "adrfam": "IPv4", 00:18:55.555 "traddr": "10.0.0.2", 00:18:55.555 "trsvcid": "4420" 00:18:55.555 }, 00:18:55.555 "peer_address": { 00:18:55.555 "trtype": "TCP", 00:18:55.555 "adrfam": "IPv4", 00:18:55.555 "traddr": "10.0.0.1", 00:18:55.555 "trsvcid": "49882" 00:18:55.555 }, 00:18:55.555 "auth": { 00:18:55.555 "state": "completed", 00:18:55.555 "digest": "sha384", 00:18:55.555 "dhgroup": "ffdhe8192" 00:18:55.555 } 00:18:55.555 } 00:18:55.555 ]' 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.555 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.814 10:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.384 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:56.384 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:56.645 00:18:56.645 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:56.645 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.645 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.903 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:56.903 { 00:18:56.903 "cntlid": 97, 00:18:56.903 "qid": 0, 00:18:56.903 "state": "enabled", 00:18:56.903 "listen_address": { 00:18:56.903 "trtype": "TCP", 00:18:56.903 "adrfam": "IPv4", 00:18:56.903 "traddr": "10.0.0.2", 00:18:56.903 "trsvcid": "4420" 00:18:56.903 }, 00:18:56.903 "peer_address": { 00:18:56.903 "trtype": "TCP", 00:18:56.903 "adrfam": "IPv4", 00:18:56.903 "traddr": "10.0.0.1", 00:18:56.903 "trsvcid": "49904" 00:18:56.903 }, 00:18:56.903 "auth": { 00:18:56.903 "state": "completed", 00:18:56.903 "digest": "sha512", 00:18:56.903 "dhgroup": "null" 00:18:56.903 } 00:18:56.903 } 00:18:56.903 ]' 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.904 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.161 10:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.727 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.986 00:18:57.986 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.986 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.986 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.246 { 00:18:58.246 "cntlid": 99, 00:18:58.246 "qid": 0, 00:18:58.246 "state": "enabled", 00:18:58.246 "listen_address": { 00:18:58.246 "trtype": "TCP", 00:18:58.246 "adrfam": "IPv4", 00:18:58.246 "traddr": "10.0.0.2", 00:18:58.246 "trsvcid": "4420" 00:18:58.246 }, 00:18:58.246 "peer_address": { 00:18:58.246 "trtype": "TCP", 00:18:58.246 "adrfam": "IPv4", 00:18:58.246 "traddr": "10.0.0.1", 00:18:58.246 "trsvcid": "49936" 00:18:58.246 }, 00:18:58.246 "auth": { 00:18:58.246 "state": "completed", 00:18:58.246 "digest": "sha512", 00:18:58.246 "dhgroup": "null" 00:18:58.246 } 00:18:58.246 } 00:18:58.246 ]' 00:18:58.246 10:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.246 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.504 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.070 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.328 10:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
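For readers following the trace, each connect_authenticate pass above (and in the passes below) reduces to the same sequence of target-side and host-side calls. The sketch below condenses one pass into plain bash; it is illustrative only: $rpc stands for the target-side rpc.py invocation behind rpc_cmd, $hostrpc for the host-side instance on /var/tmp/host.sock, and $subnqn, $hostnqn, $hostid and $secret are assumed to already hold this run's NQNs, host ID and DHHC-1 secret. Only commands that appear verbatim in the trace are used.

```bash
# Condensed sketch of one connect_authenticate pass, as traced above.
# Assumptions (not taken verbatim from target/auth.sh): $rpc wraps the target-side
# rpc.py, $hostrpc wraps "rpc.py -s /var/tmp/host.sock", and $subnqn/$hostnqn/
# $hostid/$secret are pre-set to this run's values.
digest=sha512 dhgroup=null keyid=key0

# Target side: authorize the host NQN with the chosen DH-HMAC-CHAP key.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$keyid"

# Host side (SPDK initiator): pin the digest/dhgroup, then attach with the key.
$hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "$keyid"

# Check that the controller came up and the qpair negotiated what was requested
# (these checks rely on errexit, as in the autotest environment).
[[ "$($hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest"  ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed  ]]

# Repeat the connection with the kernel initiator using the raw DHHC-1 secret,
# then tear everything down so the next digest/dhgroup/key combination starts clean.
$hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret "$secret"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```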
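The passes that follow repeat exactly this flow while the outer loops advance: the trace shows the "for digest", "for dhgroup" and "for keyid" frames from target/auth.sh (auth.sh@84-86), with the dhgroup moving from null through ffdhe2048, ffdhe3072 and ffdhe4096 and the key index cycling over the four DHHC-1 secrets. A minimal sketch of that driver loop, assuming the hostrpc and connect_authenticate helpers defined in target/auth.sh and filling in only the lists visible in this excerpt:

```bash
# Driver loops behind the repeated passes (a sketch; the real lists live in
# target/auth.sh and may contain more entries than are visible in this excerpt).
digests=(sha512)                               # this excerpt only exercises sha512
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)  # groups seen so far in the trace
keys=("DHHC-1:00:...:" "DHHC-1:01:...:" "DHHC-1:02:...:" "DHHC-1:03:...:")  # placeholders

for digest in "${digests[@]}"; do            # auth.sh@84
  for dhgroup in "${dhgroups[@]}"; do        # auth.sh@85
    for keyid in "${!keys[@]}"; do           # auth.sh@86
      # Re-pin the host options for each combination, then run the full
      # connect/verify/disconnect pass sketched above.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # auth.sh@87
      connect_authenticate "$digest" "$dhgroup" "$keyid"                                     # auth.sh@89
    done
  done
done
```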
00:18:59.328 00:18:59.328 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.328 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.328 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:59.588 { 00:18:59.588 "cntlid": 101, 00:18:59.588 "qid": 0, 00:18:59.588 "state": "enabled", 00:18:59.588 "listen_address": { 00:18:59.588 "trtype": "TCP", 00:18:59.588 "adrfam": "IPv4", 00:18:59.588 "traddr": "10.0.0.2", 00:18:59.588 "trsvcid": "4420" 00:18:59.588 }, 00:18:59.588 "peer_address": { 00:18:59.588 "trtype": "TCP", 00:18:59.588 "adrfam": "IPv4", 00:18:59.588 "traddr": "10.0.0.1", 00:18:59.588 "trsvcid": "49960" 00:18:59.588 }, 00:18:59.588 "auth": { 00:18:59.588 "state": "completed", 00:18:59.588 "digest": "sha512", 00:18:59.588 "dhgroup": "null" 00:18:59.588 } 00:18:59.588 } 00:18:59.588 ]' 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.588 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.848 10:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.416 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.674 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.674 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.932 { 00:19:00.932 "cntlid": 103, 00:19:00.932 "qid": 0, 00:19:00.932 "state": "enabled", 00:19:00.932 "listen_address": { 00:19:00.932 "trtype": "TCP", 00:19:00.932 "adrfam": "IPv4", 00:19:00.932 "traddr": "10.0.0.2", 00:19:00.932 "trsvcid": "4420" 00:19:00.932 }, 00:19:00.932 "peer_address": { 00:19:00.932 "trtype": "TCP", 00:19:00.932 "adrfam": "IPv4", 00:19:00.932 "traddr": "10.0.0.1", 00:19:00.932 
"trsvcid": "56244" 00:19:00.932 }, 00:19:00.932 "auth": { 00:19:00.932 "state": "completed", 00:19:00.932 "digest": "sha512", 00:19:00.932 "dhgroup": "null" 00:19:00.932 } 00:19:00.932 } 00:19:00.932 ]' 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.932 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.190 10:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:01.758 10:38:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.759 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.759 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.759 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.759 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:02.017 00:19:02.017 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.017 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.017 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.276 { 00:19:02.276 "cntlid": 105, 00:19:02.276 "qid": 0, 00:19:02.276 "state": "enabled", 00:19:02.276 "listen_address": { 00:19:02.276 "trtype": "TCP", 00:19:02.276 "adrfam": "IPv4", 00:19:02.276 "traddr": "10.0.0.2", 00:19:02.276 "trsvcid": "4420" 00:19:02.276 }, 00:19:02.276 "peer_address": { 00:19:02.276 "trtype": "TCP", 00:19:02.276 "adrfam": "IPv4", 00:19:02.276 "traddr": "10.0.0.1", 00:19:02.276 "trsvcid": "56266" 00:19:02.276 }, 00:19:02.276 "auth": { 00:19:02.276 "state": "completed", 00:19:02.276 "digest": "sha512", 00:19:02.276 "dhgroup": "ffdhe2048" 00:19:02.276 } 00:19:02.276 } 00:19:02.276 ]' 00:19:02.276 10:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.276 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.534 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.101 10:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.359 00:19:03.359 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.359 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.359 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.617 
10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:03.617 { 00:19:03.617 "cntlid": 107, 00:19:03.617 "qid": 0, 00:19:03.617 "state": "enabled", 00:19:03.617 "listen_address": { 00:19:03.617 "trtype": "TCP", 00:19:03.617 "adrfam": "IPv4", 00:19:03.617 "traddr": "10.0.0.2", 00:19:03.617 "trsvcid": "4420" 00:19:03.617 }, 00:19:03.617 "peer_address": { 00:19:03.617 "trtype": "TCP", 00:19:03.617 "adrfam": "IPv4", 00:19:03.617 "traddr": "10.0.0.1", 00:19:03.617 "trsvcid": "56284" 00:19:03.617 }, 00:19:03.617 "auth": { 00:19:03.617 "state": "completed", 00:19:03.617 "digest": "sha512", 00:19:03.617 "dhgroup": "ffdhe2048" 00:19:03.617 } 00:19:03.617 } 00:19:03.617 ]' 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.617 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.877 10:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.442 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe2048 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.443 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.703 00:19:04.703 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.703 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.703 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.961 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.961 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:04.962 { 00:19:04.962 "cntlid": 109, 00:19:04.962 "qid": 0, 00:19:04.962 "state": "enabled", 00:19:04.962 "listen_address": { 00:19:04.962 "trtype": "TCP", 00:19:04.962 "adrfam": "IPv4", 00:19:04.962 "traddr": "10.0.0.2", 00:19:04.962 "trsvcid": "4420" 00:19:04.962 }, 00:19:04.962 "peer_address": { 00:19:04.962 "trtype": "TCP", 00:19:04.962 "adrfam": "IPv4", 00:19:04.962 "traddr": "10.0.0.1", 00:19:04.962 "trsvcid": "56306" 00:19:04.962 }, 00:19:04.962 "auth": { 00:19:04.962 "state": "completed", 00:19:04.962 "digest": "sha512", 00:19:04.962 "dhgroup": "ffdhe2048" 00:19:04.962 } 00:19:04.962 } 00:19:04.962 ]' 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.962 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.221 10:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.789 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.047 00:19:06.047 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.047 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.047 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.305 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.305 10:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.305 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.305 10:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.305 { 00:19:06.305 "cntlid": 111, 00:19:06.305 "qid": 0, 00:19:06.305 "state": "enabled", 00:19:06.305 "listen_address": { 00:19:06.305 "trtype": "TCP", 00:19:06.305 "adrfam": "IPv4", 00:19:06.305 "traddr": "10.0.0.2", 00:19:06.305 "trsvcid": "4420" 00:19:06.305 }, 00:19:06.305 "peer_address": { 00:19:06.305 "trtype": "TCP", 00:19:06.305 "adrfam": "IPv4", 00:19:06.305 "traddr": "10.0.0.1", 00:19:06.305 "trsvcid": "56334" 00:19:06.305 }, 00:19:06.305 "auth": { 00:19:06.305 "state": "completed", 00:19:06.305 "digest": "sha512", 00:19:06.305 "dhgroup": "ffdhe2048" 00:19:06.305 } 00:19:06.305 } 00:19:06.305 ]' 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.305 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.564 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.133 10:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.134 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:07.134 10:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:07.393 00:19:07.393 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:07.393 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.393 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # qpairs='[ 00:19:07.653 { 00:19:07.653 "cntlid": 113, 00:19:07.653 "qid": 0, 00:19:07.653 "state": "enabled", 00:19:07.653 "listen_address": { 00:19:07.653 "trtype": "TCP", 00:19:07.653 "adrfam": "IPv4", 00:19:07.653 "traddr": "10.0.0.2", 00:19:07.653 "trsvcid": "4420" 00:19:07.653 }, 00:19:07.653 "peer_address": { 00:19:07.653 "trtype": "TCP", 00:19:07.653 "adrfam": "IPv4", 00:19:07.653 "traddr": "10.0.0.1", 00:19:07.653 "trsvcid": "56346" 00:19:07.653 }, 00:19:07.653 "auth": { 00:19:07.653 "state": "completed", 00:19:07.653 "digest": "sha512", 00:19:07.653 "dhgroup": "ffdhe3072" 00:19:07.653 } 00:19:07.653 } 00:19:07.653 ]' 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.653 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.910 10:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.476 10:38:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:08.476 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:08.736 00:19:08.736 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.736 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.736 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.997 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.997 { 00:19:08.997 "cntlid": 115, 00:19:08.997 "qid": 0, 00:19:08.998 "state": "enabled", 00:19:08.998 "listen_address": { 00:19:08.998 "trtype": "TCP", 00:19:08.998 "adrfam": "IPv4", 00:19:08.998 "traddr": "10.0.0.2", 00:19:08.998 "trsvcid": "4420" 00:19:08.998 }, 00:19:08.998 "peer_address": { 00:19:08.998 "trtype": "TCP", 00:19:08.998 "adrfam": "IPv4", 00:19:08.998 "traddr": "10.0.0.1", 00:19:08.998 "trsvcid": "56368" 00:19:08.998 }, 00:19:08.998 "auth": { 00:19:08.998 "state": "completed", 00:19:08.998 "digest": "sha512", 00:19:08.998 "dhgroup": "ffdhe3072" 00:19:08.998 } 00:19:08.998 } 00:19:08.998 ]' 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:08.998 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.256 10:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.822 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:10.081 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.081 10:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:10.342 { 00:19:10.342 "cntlid": 117, 00:19:10.342 "qid": 0, 00:19:10.342 "state": "enabled", 00:19:10.342 "listen_address": { 00:19:10.342 "trtype": "TCP", 00:19:10.342 "adrfam": "IPv4", 00:19:10.342 "traddr": "10.0.0.2", 00:19:10.342 "trsvcid": "4420" 00:19:10.342 }, 00:19:10.342 "peer_address": { 00:19:10.342 "trtype": "TCP", 00:19:10.342 "adrfam": "IPv4", 00:19:10.342 "traddr": "10.0.0.1", 00:19:10.342 "trsvcid": "55744" 00:19:10.342 }, 00:19:10.342 "auth": { 00:19:10.342 "state": "completed", 00:19:10.342 "digest": "sha512", 00:19:10.342 "dhgroup": "ffdhe3072" 00:19:10.342 } 00:19:10.342 } 00:19:10.342 ]' 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.342 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.637 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:11.206 10:38:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.206 10:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.206 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.465 00:19:11.465 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.465 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.465 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:11.723 { 00:19:11.723 "cntlid": 119, 00:19:11.723 "qid": 0, 00:19:11.723 "state": "enabled", 00:19:11.723 "listen_address": { 00:19:11.723 "trtype": "TCP", 00:19:11.723 "adrfam": "IPv4", 00:19:11.723 "traddr": "10.0.0.2", 00:19:11.723 "trsvcid": "4420" 00:19:11.723 }, 00:19:11.723 "peer_address": { 00:19:11.723 "trtype": "TCP", 00:19:11.723 "adrfam": "IPv4", 00:19:11.723 "traddr": "10.0.0.1", 00:19:11.723 "trsvcid": "55782" 00:19:11.723 }, 00:19:11.723 "auth": { 00:19:11.723 "state": "completed", 00:19:11.723 "digest": "sha512", 00:19:11.723 "dhgroup": 
"ffdhe3072" 00:19:11.723 } 00:19:11.723 } 00:19:11.723 ]' 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.723 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.982 10:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.552 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:12.553 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.553 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:12.553 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.553 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:12.553 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:12.811 00:19:12.811 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:12.811 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:12.812 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:13.069 { 00:19:13.069 "cntlid": 121, 00:19:13.069 "qid": 0, 00:19:13.069 "state": "enabled", 00:19:13.069 "listen_address": { 00:19:13.069 "trtype": "TCP", 00:19:13.069 "adrfam": "IPv4", 00:19:13.069 "traddr": "10.0.0.2", 00:19:13.069 "trsvcid": "4420" 00:19:13.069 }, 00:19:13.069 "peer_address": { 00:19:13.069 "trtype": "TCP", 00:19:13.069 "adrfam": "IPv4", 00:19:13.069 "traddr": "10.0.0.1", 00:19:13.069 "trsvcid": "55816" 00:19:13.069 }, 00:19:13.069 "auth": { 00:19:13.069 "state": "completed", 00:19:13.069 "digest": "sha512", 00:19:13.069 "dhgroup": "ffdhe4096" 00:19:13.069 } 00:19:13.069 } 00:19:13.069 ]' 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.069 10:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.327 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret 
DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.895 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:14.155 10:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:14.155 00:19:14.415 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.415 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:14.416 { 00:19:14.416 "cntlid": 123, 00:19:14.416 "qid": 0, 00:19:14.416 "state": "enabled", 00:19:14.416 "listen_address": { 00:19:14.416 "trtype": "TCP", 00:19:14.416 "adrfam": "IPv4", 00:19:14.416 "traddr": "10.0.0.2", 00:19:14.416 "trsvcid": "4420" 00:19:14.416 }, 00:19:14.416 "peer_address": { 00:19:14.416 "trtype": "TCP", 00:19:14.416 "adrfam": "IPv4", 00:19:14.416 "traddr": "10.0.0.1", 00:19:14.416 "trsvcid": "55840" 00:19:14.416 }, 00:19:14.416 "auth": { 00:19:14.416 "state": "completed", 00:19:14.416 "digest": "sha512", 00:19:14.416 "dhgroup": "ffdhe4096" 00:19:14.416 } 00:19:14.416 } 00:19:14.416 ]' 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.416 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:14.674 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.674 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.674 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.674 10:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.240 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.500 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.760 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:15.760 { 00:19:15.760 "cntlid": 125, 00:19:15.760 "qid": 0, 00:19:15.760 "state": "enabled", 00:19:15.760 "listen_address": { 00:19:15.760 "trtype": "TCP", 00:19:15.760 "adrfam": "IPv4", 00:19:15.760 "traddr": "10.0.0.2", 00:19:15.760 "trsvcid": "4420" 00:19:15.760 }, 00:19:15.760 "peer_address": { 00:19:15.760 "trtype": "TCP", 00:19:15.760 "adrfam": "IPv4", 00:19:15.760 "traddr": "10.0.0.1", 00:19:15.760 "trsvcid": "55866" 00:19:15.760 }, 00:19:15.760 "auth": { 00:19:15.760 "state": "completed", 00:19:15.760 "digest": "sha512", 00:19:15.760 "dhgroup": "ffdhe4096" 00:19:15.760 } 00:19:15.760 } 00:19:15.760 ]' 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.760 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.018 10:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.584 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.841 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.099 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.099 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.100 10:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.100 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:17.100 { 00:19:17.100 "cntlid": 127, 00:19:17.100 "qid": 0, 00:19:17.100 "state": "enabled", 00:19:17.100 "listen_address": { 00:19:17.100 "trtype": "TCP", 00:19:17.100 "adrfam": "IPv4", 00:19:17.100 "traddr": "10.0.0.2", 00:19:17.100 "trsvcid": "4420" 00:19:17.100 }, 00:19:17.100 "peer_address": { 00:19:17.100 "trtype": "TCP", 00:19:17.100 "adrfam": "IPv4", 00:19:17.100 "traddr": "10.0.0.1", 00:19:17.100 "trsvcid": "55898" 00:19:17.100 }, 00:19:17.100 "auth": { 00:19:17.100 "state": "completed", 00:19:17.100 "digest": "sha512", 00:19:17.100 "dhgroup": "ffdhe4096" 00:19:17.100 } 00:19:17.100 } 00:19:17.100 ]' 00:19:17.100 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:17.359 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.359 10:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.359 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
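After each bdev-level verification the script detaches the controller and also exercises the kernel host path with nvme-cli before removing the host entry and moving on to the next key or dhgroup. A minimal sketch of that round trip, with $uuid and $secret as illustrative placeholders for the host UUID and the DHHC-1 secret strings visible in the trace:

    # kernel initiator: authenticate against the subsystem with the DH-HMAC-CHAP secret for this key
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:$uuid --hostid $uuid --dhchap-secret $secret
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: drop the host entry so the next key/dhgroup combination starts from a clean subsystem
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:$uuid

The four distinct DHHC-1 secrets in the trace appear to correspond to the four keys (key0 through key3) the subsystem is cycled through; the nvmf_subsystem_remove_host call is what resets the subsystem between iterations.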
00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.926 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.186 10:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.444 00:19:18.444 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:18.444 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:18.444 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:18.702 { 00:19:18.702 "cntlid": 129, 00:19:18.702 "qid": 0, 00:19:18.702 "state": "enabled", 00:19:18.702 "listen_address": { 00:19:18.702 
"trtype": "TCP", 00:19:18.702 "adrfam": "IPv4", 00:19:18.702 "traddr": "10.0.0.2", 00:19:18.702 "trsvcid": "4420" 00:19:18.702 }, 00:19:18.702 "peer_address": { 00:19:18.702 "trtype": "TCP", 00:19:18.702 "adrfam": "IPv4", 00:19:18.702 "traddr": "10.0.0.1", 00:19:18.702 "trsvcid": "55912" 00:19:18.702 }, 00:19:18.702 "auth": { 00:19:18.702 "state": "completed", 00:19:18.702 "digest": "sha512", 00:19:18.702 "dhgroup": "ffdhe6144" 00:19:18.702 } 00:19:18.702 } 00:19:18.702 ]' 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.702 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.959 10:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:19.527 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:20.095 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:20.095 { 00:19:20.095 "cntlid": 131, 00:19:20.095 "qid": 0, 00:19:20.095 "state": "enabled", 00:19:20.095 "listen_address": { 00:19:20.095 "trtype": "TCP", 00:19:20.095 "adrfam": "IPv4", 00:19:20.095 "traddr": "10.0.0.2", 00:19:20.095 "trsvcid": "4420" 00:19:20.095 }, 00:19:20.095 "peer_address": { 00:19:20.095 "trtype": "TCP", 00:19:20.095 "adrfam": "IPv4", 00:19:20.095 "traddr": "10.0.0.1", 00:19:20.095 "trsvcid": "55946" 00:19:20.095 }, 00:19:20.095 "auth": { 00:19:20.095 "state": "completed", 00:19:20.095 "digest": "sha512", 00:19:20.095 "dhgroup": "ffdhe6144" 00:19:20.095 } 00:19:20.095 } 00:19:20.095 ]' 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.095 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:20.351 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.351 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.351 10:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:20.351 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.915 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:21.175 10:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:21.433 00:19:21.433 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:21.433 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r 
'.[].name' 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:21.434 { 00:19:21.434 "cntlid": 133, 00:19:21.434 "qid": 0, 00:19:21.434 "state": "enabled", 00:19:21.434 "listen_address": { 00:19:21.434 "trtype": "TCP", 00:19:21.434 "adrfam": "IPv4", 00:19:21.434 "traddr": "10.0.0.2", 00:19:21.434 "trsvcid": "4420" 00:19:21.434 }, 00:19:21.434 "peer_address": { 00:19:21.434 "trtype": "TCP", 00:19:21.434 "adrfam": "IPv4", 00:19:21.434 "traddr": "10.0.0.1", 00:19:21.434 "trsvcid": "42184" 00:19:21.434 }, 00:19:21.434 "auth": { 00:19:21.434 "state": "completed", 00:19:21.434 "digest": "sha512", 00:19:21.434 "dhgroup": "ffdhe6144" 00:19:21.434 } 00:19:21.434 } 00:19:21.434 ]' 00:19:21.434 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.693 10:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:22.260 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.260 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:22.260 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.260 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.518 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.775 00:19:22.775 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.775 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:22.775 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:23.035 { 00:19:23.035 "cntlid": 135, 00:19:23.035 "qid": 0, 00:19:23.035 "state": "enabled", 00:19:23.035 "listen_address": { 00:19:23.035 "trtype": "TCP", 00:19:23.035 "adrfam": "IPv4", 00:19:23.035 "traddr": "10.0.0.2", 00:19:23.035 "trsvcid": "4420" 00:19:23.035 }, 00:19:23.035 "peer_address": { 00:19:23.035 "trtype": "TCP", 00:19:23.035 "adrfam": "IPv4", 00:19:23.035 "traddr": "10.0.0.1", 00:19:23.035 "trsvcid": "42212" 00:19:23.035 }, 00:19:23.035 "auth": { 00:19:23.035 "state": "completed", 00:19:23.035 "digest": "sha512", 00:19:23.035 "dhgroup": "ffdhe6144" 00:19:23.035 } 00:19:23.035 } 00:19:23.035 ]' 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.035 10:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.295 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.861 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.118 10:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.118 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.118 10:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.377 00:19:24.377 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.377 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.377 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.637 { 00:19:24.637 "cntlid": 137, 00:19:24.637 "qid": 0, 00:19:24.637 "state": "enabled", 00:19:24.637 "listen_address": { 00:19:24.637 "trtype": "TCP", 00:19:24.637 "adrfam": "IPv4", 00:19:24.637 "traddr": "10.0.0.2", 00:19:24.637 "trsvcid": "4420" 00:19:24.637 }, 00:19:24.637 "peer_address": { 00:19:24.637 "trtype": "TCP", 00:19:24.637 "adrfam": "IPv4", 00:19:24.637 "traddr": "10.0.0.1", 00:19:24.637 "trsvcid": "42244" 00:19:24.637 }, 00:19:24.637 "auth": { 00:19:24.637 "state": "completed", 00:19:24.637 "digest": "sha512", 00:19:24.637 "dhgroup": "ffdhe8192" 00:19:24.637 } 00:19:24.637 } 00:19:24.637 ]' 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.637 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.896 10:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:25.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.463 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:25.464 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.028 00:19:26.028 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:26.028 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:26.028 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:26.287 { 00:19:26.287 "cntlid": 139, 00:19:26.287 "qid": 0, 00:19:26.287 "state": "enabled", 00:19:26.287 "listen_address": { 00:19:26.287 "trtype": "TCP", 00:19:26.287 "adrfam": "IPv4", 00:19:26.287 "traddr": "10.0.0.2", 00:19:26.287 "trsvcid": "4420" 00:19:26.287 }, 00:19:26.287 "peer_address": { 00:19:26.287 "trtype": "TCP", 00:19:26.287 "adrfam": "IPv4", 00:19:26.287 "traddr": "10.0.0.1", 00:19:26.287 "trsvcid": "42278" 00:19:26.287 }, 00:19:26.287 "auth": { 00:19:26.287 "state": "completed", 00:19:26.287 "digest": "sha512", 00:19:26.287 "dhgroup": "ffdhe8192" 00:19:26.287 } 00:19:26.287 } 00:19:26.287 ]' 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.287 10:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:26.287 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.287 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.287 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.546 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:01:YzViNzVkNzk0NGVhN2JjY2YxNDI4MWI0Yjc5YzZkYznm6H6A: 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key2 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.114 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:27.115 10:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:27.711 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:27.711 { 00:19:27.711 "cntlid": 141, 00:19:27.711 "qid": 0, 00:19:27.711 "state": "enabled", 00:19:27.711 "listen_address": { 00:19:27.711 "trtype": "TCP", 00:19:27.711 "adrfam": "IPv4", 00:19:27.711 "traddr": "10.0.0.2", 00:19:27.711 "trsvcid": "4420" 00:19:27.711 }, 00:19:27.711 "peer_address": { 00:19:27.711 "trtype": "TCP", 00:19:27.711 "adrfam": "IPv4", 00:19:27.711 "traddr": "10.0.0.1", 00:19:27.711 "trsvcid": "42304" 00:19:27.711 }, 00:19:27.711 "auth": { 00:19:27.711 "state": "completed", 00:19:27.711 "digest": "sha512", 00:19:27.711 "dhgroup": "ffdhe8192" 00:19:27.711 } 00:19:27.711 } 00:19:27.711 ]' 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.711 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.969 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.969 10:38:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.969 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.969 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:02:NWY0ZTk2NjFkNmIzMTkyMzAyMGNmYzA2ZDE3MjAyMzkyOGYzMTE0YmU5ZTVhYzRkY2Tpyw==: 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.537 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key3 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.796 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.054 00:19:29.313 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:19:29.313 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:29.313 10:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.313 { 00:19:29.313 "cntlid": 143, 00:19:29.313 "qid": 0, 00:19:29.313 "state": "enabled", 00:19:29.313 "listen_address": { 00:19:29.313 "trtype": "TCP", 00:19:29.313 "adrfam": "IPv4", 00:19:29.313 "traddr": "10.0.0.2", 00:19:29.313 "trsvcid": "4420" 00:19:29.313 }, 00:19:29.313 "peer_address": { 00:19:29.313 "trtype": "TCP", 00:19:29.313 "adrfam": "IPv4", 00:19:29.313 "traddr": "10.0.0.1", 00:19:29.313 "trsvcid": "42344" 00:19:29.313 }, 00:19:29.313 "auth": { 00:19:29.313 "state": "completed", 00:19:29.313 "digest": "sha512", 00:19:29.313 "dhgroup": "ffdhe8192" 00:19:29.313 } 00:19:29.313 } 00:19:29.313 ]' 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.313 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.571 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:03:NDQ0MWJjMTNkZmRlYzFjODMxODBmMTM1Njg1NzFkY2VkZjMyNGFhMzM5Y2I4ZGJmZTIwM2U2MTJjNWU1NzViMJZeToI=: 00:19:30.139 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.139 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:30.139 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.139 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.139 10:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.140 10:38:45 
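Each pass of the loop above verifies one DH-HMAC-CHAP key against the sha512/ffdhe8192 pairing, first through the SPDK host application and then through the kernel initiator. The sketch below reconstructs one round from the trace; the rpc.py path, NQNs, address and host UUID are copied from the log, while the keyid value, the secret placeholder and the hostrpc wrapper are illustrative, and rpc_cmd is assumed to reach the target over its default RPC socket. The two-digit field after DHHC-1 in the secrets marks how the key material was transformed (00 for an unhashed secret, 01/02/03 for SHA-256/384/512), which is why key1 carries a DHHC-1:01 secret here.

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app socket, as in the trace
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
  keyid=1                                            # placeholder: runs 0..3 across the loop

  # Pin the initiator to a single digest/DH-group pair, register the host on the
  # target with the key under test, then attach a controller using that key.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"

  # The qpair reported by the target should show a completed authentication with
  # the expected digest and DH group.
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'  # expect: completed sha512 ffdhe8192
  hostrpc bdev_nvme_detach_controller nvme0

  # Repeat the handshake with the kernel initiator, passing the raw secret, then
  # drop the host entry again before the next key is tested.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret 'DHHC-1:01:<base64 key material>:'
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"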
nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:30.140 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:19:30.140 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:30.140 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.140 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.140 10:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.397 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key0 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:30.398 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:30.657 00:19:30.657 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.657 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.657 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:30.916 { 00:19:30.916 "cntlid": 145, 
00:19:30.916 "qid": 0, 00:19:30.916 "state": "enabled", 00:19:30.916 "listen_address": { 00:19:30.916 "trtype": "TCP", 00:19:30.916 "adrfam": "IPv4", 00:19:30.916 "traddr": "10.0.0.2", 00:19:30.916 "trsvcid": "4420" 00:19:30.916 }, 00:19:30.916 "peer_address": { 00:19:30.916 "trtype": "TCP", 00:19:30.916 "adrfam": "IPv4", 00:19:30.916 "traddr": "10.0.0.1", 00:19:30.916 "trsvcid": "53852" 00:19:30.916 }, 00:19:30.916 "auth": { 00:19:30.916 "state": "completed", 00:19:30.916 "digest": "sha512", 00:19:30.916 "dhgroup": "ffdhe8192" 00:19:30.916 } 00:19:30.916 } 00:19:30.916 ]' 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.916 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.174 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.174 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.174 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.174 10:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid 00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-secret DHHC-1:00:NTBiNzk3YWMwMTBjMzZmMmM1Yzc0ODk4ZTM0ZDQ1YjhkZmZiOTczNzE3NDExNTA3xM+VHA==: 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --dhchap-key key1 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:31.742 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:32.309 request: 00:19:32.309 { 00:19:32.309 "name": "nvme0", 00:19:32.309 "trtype": "tcp", 00:19:32.309 "traddr": "10.0.0.2", 00:19:32.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3", 00:19:32.309 "adrfam": "ipv4", 00:19:32.309 "trsvcid": "4420", 00:19:32.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:32.309 "dhchap_key": "key2", 00:19:32.309 "method": "bdev_nvme_attach_controller", 00:19:32.309 "req_id": 1 00:19:32.309 } 00:19:32.309 Got JSON-RPC error response 00:19:32.309 response: 00:19:32.309 { 00:19:32.309 "code": -32602, 00:19:32.309 "message": "Invalid parameters" 00:19:32.309 } 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2693899 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2693899 ']' 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2693899 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- 
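The wrong-key attempt above is the negative half of the test: only key1 is registered for the host, so an attach with key2 has to fail, and the trace shows the expected JSON-RPC -32602 "Invalid parameters" response. Roughly, with the same names as the earlier sketch redefined so the fragment stands alone (NOT is the autotest helper that inverts an exit status; a plain if is enough to show the idea):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3

  # Register only key1 for this host, then expect an attach with key2 to fail.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
  if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
      echo 'unexpected: authentication succeeded with the wrong key' >&2
      exit 1
  fi
  # Expected failure mode, as in the trace:
  #   "code": -32602, "message": "Invalid parameters"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"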
common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2693899 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2693899' 00:19:32.309 killing process with pid 2693899 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2693899 00:19:32.309 10:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2693899 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.246 rmmod nvme_tcp 00:19:33.246 rmmod nvme_fabrics 00:19:33.246 rmmod nvme_keyring 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2693588 ']' 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2693588 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2693588 ']' 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2693588 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2693588 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2693588' 00:19:33.246 killing process with pid 2693588 00:19:33.246 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2693588 00:19:33.247 10:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2693588 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.506 10:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.042 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:36.042 10:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zUE /tmp/spdk.key-sha256.C99 /tmp/spdk.key-sha384.Mue /tmp/spdk.key-sha512.n9z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf-auth.log 00:19:36.042 00:19:36.042 real 1m54.541s 00:19:36.042 user 4m13.947s 00:19:36.042 sys 0m16.063s 00:19:36.042 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:36.042 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.042 ************************************ 00:19:36.042 END TEST nvmf_auth_target 00:19:36.042 ************************************ 00:19:36.042 10:38:51 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:36.042 10:38:51 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:36.042 10:38:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:19:36.042 10:38:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:36.042 10:38:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.042 ************************************ 00:19:36.042 START TEST nvmf_bdevio_no_huge 00:19:36.042 ************************************ 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:36.042 * Looking for test storage... 
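With the authentication cases done, the trace tears everything down: the host-side SPDK app and the nvmf target are killed, the kernel NVMe/TCP modules are unloaded, the initiator-side address is flushed, and the generated DH-HMAC-CHAP key files and auth logs are removed. A condensed sketch of that sequence (hostpid, nvmfpid and output_dir are placeholders for the PIDs and output path seen in the log; the harness's killprocess helper additionally checks the process name before killing):

  kill "$hostpid" && wait "$hostpid" || true     # 2693899 (reactor_1) in the trace
  for mod in nvme-tcp nvme-fabrics nvme-keyring; do
      modprobe -v -r "$mod" || true
  done
  kill "$nvmfpid" && wait "$nvmfpid" || true     # 2693588 (reactor_0) in the trace
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk.key-null.* /tmp/spdk.key-sha256.* /tmp/spdk.key-sha384.* /tmp/spdk.key-sha512.* \
      "$output_dir/nvme-auth.log" "$output_dir/nvmf-auth.log"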
00:19:36.042 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.042 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:36.043 10:38:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.043 10:38:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:42.604 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:42.605 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:42.605 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:42.605 Found net devices under 0000:27:00.0: cvl_0_0 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.605 
10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:42.605 Found net devices under 0000:27:00.1: cvl_0_1 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:42.605 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:42.605 00:19:42.605 --- 10.0.0.2 ping statistics --- 00:19:42.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.605 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:42.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:19:42.605 00:19:42.605 --- 10.0.0.1 ping statistics --- 00:19:42.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.605 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2719121 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2719121 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2719121 ']' 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:42.605 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.606 10:38:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:42.606 [2024-05-15 10:38:57.688558] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
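The bdevio run drives real NVMe/TCP traffic between the two ports of the same physical NIC by moving one port into a private network namespace; the ping output above confirms both directions before the target is started inside that namespace. The wiring, reconstructed from the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log):

  # The second port stays in the root namespace as the initiator; the first port
  # moves into a private namespace and becomes the target's 10.0.0.2 endpoint.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns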
00:19:42.606 [2024-05-15 10:38:57.688662] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:42.606 [2024-05-15 10:38:57.829414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.606 [2024-05-15 10:38:57.952426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.606 [2024-05-15 10:38:57.952469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.606 [2024-05-15 10:38:57.952479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.606 [2024-05-15 10:38:57.952489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.606 [2024-05-15 10:38:57.952496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.606 [2024-05-15 10:38:57.952710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:42.606 [2024-05-15 10:38:57.952853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:42.606 [2024-05-15 10:38:57.953024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.606 [2024-05-15 10:38:57.953078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.606 [2024-05-15 10:38:58.432686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.606 Malloc0 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.606 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.867 [2024-05-15 10:38:58.490360] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:42.867 [2024-05-15 10:38:58.490687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.867 { 00:19:42.867 "params": { 00:19:42.867 "name": "Nvme$subsystem", 00:19:42.867 "trtype": "$TEST_TRANSPORT", 00:19:42.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.867 "adrfam": "ipv4", 00:19:42.867 "trsvcid": "$NVMF_PORT", 00:19:42.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.867 "hdgst": ${hdgst:-false}, 00:19:42.867 "ddgst": ${ddgst:-false} 00:19:42.867 }, 00:19:42.867 "method": "bdev_nvme_attach_controller" 00:19:42.867 } 00:19:42.867 EOF 00:19:42.867 )") 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:42.867 10:38:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:42.867 "params": { 00:19:42.867 "name": "Nvme1", 00:19:42.867 "trtype": "tcp", 00:19:42.867 "traddr": "10.0.0.2", 00:19:42.867 "adrfam": "ipv4", 00:19:42.867 "trsvcid": "4420", 00:19:42.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.867 "hdgst": false, 00:19:42.867 "ddgst": false 00:19:42.867 }, 00:19:42.867 "method": "bdev_nvme_attach_controller" 00:19:42.867 }' 00:19:42.867 [2024-05-15 10:38:58.575171] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
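At this point the trace has a target configured and bdevio about to run against it. A condensed sketch of both halves follows: $rootdir stands for the spdk checkout, rpc_cmd is the autotest wrapper around scripts/rpc.py seen throughout the trace, and the subsystems/config envelope around the printed bdev_nvme_attach_controller parameters is the standard SPDK JSON config layout rather than something echoed verbatim in the log. Both binaries run with --no-huge -s 1024, i.e. without hugepages and with a 1024 MiB memory cap.

  # Target side, inside the namespace: a 64 MiB / 512 B malloc bdev exported over
  # NVMe/TCP on 10.0.0.2:4420.
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # (the harness waits for the RPC socket with waitforlisten before issuing RPCs)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevio is handed a generated JSON config on a file descriptor;
  # the config attaches a controller to that subsystem, producing bdev Nvme1n1.
  config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }'
  "$rootdir/test/bdev/bdevio/bdevio" --json <(printf '%s\n' "$config") --no-huge -s 1024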
00:19:42.867 [2024-05-15 10:38:58.575302] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2719431 ] 00:19:42.867 [2024-05-15 10:38:58.726922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.128 [2024-05-15 10:38:58.847554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.128 [2024-05-15 10:38:58.847653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.128 [2024-05-15 10:38:58.847660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.388 I/O targets: 00:19:43.388 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:43.388 00:19:43.388 00:19:43.388 CUnit - A unit testing framework for C - Version 2.1-3 00:19:43.388 http://cunit.sourceforge.net/ 00:19:43.388 00:19:43.388 00:19:43.388 Suite: bdevio tests on: Nvme1n1 00:19:43.647 Test: blockdev write read block ...passed 00:19:43.647 Test: blockdev write zeroes read block ...passed 00:19:43.647 Test: blockdev write zeroes read no split ...passed 00:19:43.647 Test: blockdev write zeroes read split ...passed 00:19:43.647 Test: blockdev write zeroes read split partial ...passed 00:19:43.647 Test: blockdev reset ...[2024-05-15 10:38:59.357829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.647 [2024-05-15 10:38:59.357937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f600 (9): Bad file descriptor 00:19:43.647 [2024-05-15 10:38:59.461588] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:43.647 passed 00:19:43.647 Test: blockdev write read 8 blocks ...passed 00:19:43.647 Test: blockdev write read size > 128k ...passed 00:19:43.647 Test: blockdev write read invalid size ...passed 00:19:43.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:43.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:43.647 Test: blockdev write read max offset ...passed 00:19:43.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:43.905 Test: blockdev writev readv 8 blocks ...passed 00:19:43.905 Test: blockdev writev readv 30 x 1block ...passed 00:19:43.905 Test: blockdev writev readv block ...passed 00:19:43.905 Test: blockdev writev readv size > 128k ...passed 00:19:43.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:43.905 Test: blockdev comparev and writev ...[2024-05-15 10:38:59.678882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.678924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.678943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.678952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.679899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:43.905 [2024-05-15 10:38:59.679907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:43.905 passed 00:19:43.905 Test: blockdev nvme passthru rw ...passed 00:19:43.905 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:38:59.763595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.905 [2024-05-15 10:38:59.763618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.763757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.905 [2024-05-15 10:38:59.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.763900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.905 [2024-05-15 10:38:59.763909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:43.905 [2024-05-15 10:38:59.764048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.905 [2024-05-15 10:38:59.764058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:43.905 passed 00:19:43.905 Test: blockdev nvme admin passthru ...passed 00:19:44.164 Test: blockdev copy ...passed 00:19:44.164 00:19:44.164 Run Summary: Type Total Ran Passed Failed Inactive 00:19:44.164 suites 1 1 n/a 0 0 00:19:44.164 tests 23 23 23 0 0 00:19:44.164 asserts 
152 152 152 0 n/a 00:19:44.164 00:19:44.164 Elapsed time = 1.202 seconds 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.425 rmmod nvme_tcp 00:19:44.425 rmmod nvme_fabrics 00:19:44.425 rmmod nvme_keyring 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2719121 ']' 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2719121 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2719121 ']' 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2719121 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:44.425 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2719121 00:19:44.685 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:19:44.685 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:19:44.685 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2719121' 00:19:44.685 killing process with pid 2719121 00:19:44.685 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2719121 00:19:44.685 [2024-05-15 10:39:00.307945] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:44.685 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2719121 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.946 10:39:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.478 10:39:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.478 00:19:47.478 real 0m11.294s 00:19:47.478 user 0m15.291s 00:19:47.478 sys 0m5.429s 00:19:47.478 10:39:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:47.478 10:39:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:47.478 ************************************ 00:19:47.478 END TEST nvmf_bdevio_no_huge 00:19:47.478 ************************************ 00:19:47.478 10:39:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.478 10:39:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:47.478 10:39:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:47.478 10:39:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.478 ************************************ 00:19:47.478 START TEST nvmf_tls 00:19:47.478 ************************************ 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.478 * Looking for test storage... 
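Before the next test begins, note that the teardown that just scrolled by (nvmftestfini) is roughly symmetric with the setup: stop the target, unload the kernel NVMe/TCP initiator modules, and tear the test namespace and addresses back down. A rough by-hand sketch using this run's names (the netns removal is an assumption about what remove_spdk_ns does under the hood):

# stop the nvmf_tgt that served the bdevio run (pid 2719121 in this log)
kill 2719121
# unload the initiator stack; nvme_fabrics/nvme_keyring come out with it, as the rmmod lines show
modprobe -v -r nvme-tcp
# drop the spdk test namespace and flush the leftover initiator-side address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1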
00:19:47.478 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.478 10:39:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.479 10:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.762 
10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:52.762 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:52.762 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:52.762 Found net devices under 0000:27:00.0: cvl_0_0 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:52.762 Found net devices under 0000:27:00.1: cvl_0_1 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:19:52.762 00:19:52.762 --- 10.0.0.2 ping statistics --- 00:19:52.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.762 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:19:52.762 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:19:52.763 00:19:52.763 --- 10.0.0.1 ping statistics --- 00:19:52.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.763 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2724164 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2724164 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2724164 ']' 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:52.763 10:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.763 [2024-05-15 10:39:08.368848] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:19:52.763 [2024-05-15 10:39:08.368952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.763 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.763 [2024-05-15 10:39:08.496996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.763 [2024-05-15 10:39:08.595314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.763 [2024-05-15 10:39:08.595352] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.763 [2024-05-15 10:39:08.595361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.763 [2024-05-15 10:39:08.595371] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.763 [2024-05-15 10:39:08.595379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.763 [2024-05-15 10:39:08.595414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:53.359 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:53.359 true 00:19:53.617 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.617 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:53.617 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:53.617 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:53.617 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:53.874 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:53.874 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.874 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:53.874 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:53.874 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.134 10:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:54.393 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:54.393 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:54.393 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:54.393 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.393 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:54.651 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:54.651 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:54.651 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:54.651 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.651 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TDEt3UY5ql 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.8urR9QNIML 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TDEt3UY5ql 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.8urR9QNIML 00:19:54.911 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 
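Condensed, the key handling above amounts to: select the ssl socket implementation, pin TLS 1.3, and stash two interchange-format PSKs (the second is used later for the mismatched-key case) in 0600 temp files. A sketch with the literal values from this run:

# switch the target's default socket implementation to ssl and require TLS 1.3
scripts/rpc.py sock_set_default_impl -i ssl
scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # should print 13
# write the two interchange-format keys out with root-only permissions
key_path=$(mktemp);   echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
key_2_path=$(mktemp); echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > "$key_2_path"
chmod 0600 "$key_path" "$key_2_path"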
00:19:55.169 10:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:55.427 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TDEt3UY5ql 00:19:55.427 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TDEt3UY5ql 00:19:55.427 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.427 [2024-05-15 10:39:11.197617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.427 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.685 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.685 [2024-05-15 10:39:11.469631] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:55.685 [2024-05-15 10:39:11.469724] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.685 [2024-05-15 10:39:11.469935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.685 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.944 malloc0 00:19:55.945 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.945 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDEt3UY5ql 00:19:56.203 [2024-05-15 10:39:11.883263] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:56.203 10:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TDEt3UY5ql 00:19:56.203 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.196 Initializing NVMe Controllers 00:20:06.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.196 Initialization complete. Launching workers. 
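Stripped of its wrapper functions, the target-side setup feeding this perf run is a short RPC sequence; a minimal sketch with this run's values (the --psk path being the first temp key file created above):

scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDEt3UY5ql

The host side then only needs spdk_nvme_perf pointed at that listener with -S ssl and --psk-path set to the same key file, which is exactly the command launched above; its results follow.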
00:20:06.196 ======================================================== 00:20:06.196 Latency(us) 00:20:06.196 Device Information : IOPS MiB/s Average min max 00:20:06.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17257.69 67.41 3708.83 1097.10 5403.59 00:20:06.196 ======================================================== 00:20:06.196 Total : 17257.69 67.41 3708.83 1097.10 5403.59 00:20:06.196 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TDEt3UY5ql 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TDEt3UY5ql' 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2726868 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2726868 /var/tmp/bdevperf.sock 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2726868 ']' 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.454 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.454 [2024-05-15 10:39:22.142081] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:20:06.454 [2024-05-15 10:39:22.142205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726868 ] 00:20:06.454 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.454 [2024-05-15 10:39:22.251489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.712 [2024-05-15 10:39:22.346861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.971 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:06.971 10:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:06.971 10:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDEt3UY5ql 00:20:07.229 [2024-05-15 10:39:22.947846] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.229 [2024-05-15 10:39:22.947978] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:07.229 TLSTESTn1 00:20:07.229 10:39:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:07.229 Running I/O for 10 seconds... 00:20:19.428 00:20:19.428 Latency(us) 00:20:19.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.428 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.428 Verification LBA range: start 0x0 length 0x2000 00:20:19.428 TLSTESTn1 : 10.02 5796.61 22.64 0.00 0.00 22040.69 5829.25 30353.52 00:20:19.428 =================================================================================================================== 00:20:19.428 Total : 5796.61 22.64 0.00 0.00 22040.69 5829.25 30353.52 00:20:19.428 0 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2726868 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2726868 ']' 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2726868 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2726868 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2726868' 00:20:19.428 killing process with pid 2726868 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2726868 00:20:19.428 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.428 00:20:19.428 Latency(us) 00:20:19.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.428 
=================================================================================================================== 00:20:19.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.428 [2024-05-15 10:39:33.168947] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2726868 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8urR9QNIML 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:19.428 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8urR9QNIML 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8urR9QNIML 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8urR9QNIML' 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2728966 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2728966 /var/tmp/bdevperf.sock 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2728966 ']' 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:19.429 10:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.429 [2024-05-15 10:39:33.609968] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:20:19.429 [2024-05-15 10:39:33.610058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728966 ] 00:20:19.429 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.429 [2024-05-15 10:39:33.695612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.429 [2024-05-15 10:39:33.792825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8urR9QNIML 00:20:19.429 [2024-05-15 10:39:34.481485] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.429 [2024-05-15 10:39:34.481614] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.429 [2024-05-15 10:39:34.488911] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.429 [2024-05-15 10:39:34.489166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:19.429 [2024-05-15 10:39:34.490145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:19.429 [2024-05-15 10:39:34.491140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.429 [2024-05-15 10:39:34.491159] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.429 [2024-05-15 10:39:34.491173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
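This failure is the intended result: the initiator presented /tmp/tmp.8urR9QNIML, which does not match the key registered for host1 on the target, so the TLS handshake fails and the attach RPC returns the error dumped below. Side by side, the passing and failing initiator-side calls against the bdevperf RPC socket differ only in the key:

# passes: PSK matches what nvmf_subsystem_add_host registered for host1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDEt3UY5ql
# fails with -32602: same host, wrong PSK, so the connection never gets past the handshake
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8urR9QNIML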
00:20:19.429 request: 00:20:19.429 { 00:20:19.429 "name": "TLSTEST", 00:20:19.429 "trtype": "tcp", 00:20:19.429 "traddr": "10.0.0.2", 00:20:19.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.429 "adrfam": "ipv4", 00:20:19.429 "trsvcid": "4420", 00:20:19.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.429 "psk": "/tmp/tmp.8urR9QNIML", 00:20:19.429 "method": "bdev_nvme_attach_controller", 00:20:19.429 "req_id": 1 00:20:19.429 } 00:20:19.429 Got JSON-RPC error response 00:20:19.429 response: 00:20:19.429 { 00:20:19.429 "code": -32602, 00:20:19.429 "message": "Invalid parameters" 00:20:19.429 } 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2728966 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2728966 ']' 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2728966 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2728966 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2728966' 00:20:19.429 killing process with pid 2728966 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2728966 00:20:19.429 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.429 00:20:19.429 Latency(us) 00:20:19.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.429 =================================================================================================================== 00:20:19.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.429 [2024-05-15 10:39:34.562506] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2728966 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TDEt3UY5ql 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TDEt3UY5ql 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TDEt3UY5ql 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TDEt3UY5ql' 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2729253 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2729253 /var/tmp/bdevperf.sock 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2729253 ']' 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.429 10:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.429 [2024-05-15 10:39:35.002274] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:20:19.429 [2024-05-15 10:39:35.002390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729253 ] 00:20:19.429 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.429 [2024-05-15 10:39:35.114116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.429 [2024-05-15 10:39:35.212065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.997 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:19.997 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:19.997 10:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TDEt3UY5ql 00:20:19.997 [2024-05-15 10:39:35.853689] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.997 [2024-05-15 10:39:35.853841] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.997 [2024-05-15 10:39:35.861768] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.997 [2024-05-15 10:39:35.861801] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.997 [2024-05-15 10:39:35.861843] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.997 [2024-05-15 10:39:35.862016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:19.997 [2024-05-15 10:39:35.862995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:19.997 [2024-05-15 10:39:35.863991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.997 [2024-05-15 10:39:35.864009] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.997 [2024-05-15 10:39:35.864023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:19.997 request: 00:20:19.997 { 00:20:19.997 "name": "TLSTEST", 00:20:19.997 "trtype": "tcp", 00:20:19.997 "traddr": "10.0.0.2", 00:20:19.997 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.997 "adrfam": "ipv4", 00:20:19.997 "trsvcid": "4420", 00:20:19.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.997 "psk": "/tmp/tmp.TDEt3UY5ql", 00:20:19.997 "method": "bdev_nvme_attach_controller", 00:20:19.997 "req_id": 1 00:20:19.997 } 00:20:19.997 Got JSON-RPC error response 00:20:19.997 response: 00:20:19.997 { 00:20:19.997 "code": -32602, 00:20:19.997 "message": "Invalid parameters" 00:20:19.997 } 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2729253 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2729253 ']' 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2729253 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2729253 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2729253' 00:20:20.257 killing process with pid 2729253 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2729253 00:20:20.257 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.257 00:20:20.257 Latency(us) 00:20:20.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.257 =================================================================================================================== 00:20:20.257 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.257 [2024-05-15 10:39:35.931581] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.257 10:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2729253 00:20:20.516 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:20.516 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TDEt3UY5ql 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TDEt3UY5ql 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TDEt3UY5ql 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TDEt3UY5ql' 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2729556 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2729556 /var/tmp/bdevperf.sock 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2729556 ']' 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.517 10:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.774 [2024-05-15 10:39:36.401938] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
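The attach attempts in this part of the trace (host2 against cnode1, host1 against cnode2, and host1 against cnode1 with no PSK at all) are deliberately expected to fail: the target only has a PSK on file for the host1/cnode1 pair. Judging from the "Could not find PSK for identity" errors, the lookup key is an identity string built from the host and subsystem NQNs, roughly as sketched below; the helper name is hypothetical and only restates the string the errors print.

# Hypothetical helper, only to make the failing lookups in this trace easier to read.
# It reproduces the identity string shown in the
# "Could not find PSK for identity: ..." errors.
psk_identity() {
	local hostnqn=$1 subnqn=$2
	echo "NVMe0R01 ${hostnqn} ${subnqn}"
}
psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
#    (no PSK registered under this identity, so the controller attach fails)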
00:20:20.774 [2024-05-15 10:39:36.402092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729556 ] 00:20:20.774 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.774 [2024-05-15 10:39:36.526884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.774 [2024-05-15 10:39:36.622947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.339 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:21.339 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:21.339 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDEt3UY5ql 00:20:21.598 [2024-05-15 10:39:37.239572] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.598 [2024-05-15 10:39:37.239695] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:21.598 [2024-05-15 10:39:37.248458] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.598 [2024-05-15 10:39:37.248489] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.598 [2024-05-15 10:39:37.248525] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.598 [2024-05-15 10:39:37.249375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:21.598 [2024-05-15 10:39:37.250355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:21.598 [2024-05-15 10:39:37.251350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:21.598 [2024-05-15 10:39:37.251375] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.598 [2024-05-15 10:39:37.251387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:21.598 request: 00:20:21.598 { 00:20:21.598 "name": "TLSTEST", 00:20:21.598 "trtype": "tcp", 00:20:21.598 "traddr": "10.0.0.2", 00:20:21.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.598 "adrfam": "ipv4", 00:20:21.598 "trsvcid": "4420", 00:20:21.598 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.598 "psk": "/tmp/tmp.TDEt3UY5ql", 00:20:21.598 "method": "bdev_nvme_attach_controller", 00:20:21.598 "req_id": 1 00:20:21.598 } 00:20:21.598 Got JSON-RPC error response 00:20:21.599 response: 00:20:21.599 { 00:20:21.599 "code": -32602, 00:20:21.599 "message": "Invalid parameters" 00:20:21.599 } 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2729556 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2729556 ']' 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2729556 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2729556 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2729556' 00:20:21.599 killing process with pid 2729556 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2729556 00:20:21.599 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.599 00:20:21.599 Latency(us) 00:20:21.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.599 =================================================================================================================== 00:20:21.599 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.599 [2024-05-15 10:39:37.307901] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:21.599 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2729556 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:21.860 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2729862 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2729862 /var/tmp/bdevperf.sock 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2729862 ']' 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 10:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.122 [2024-05-15 10:39:37.755561] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:20:22.122 [2024-05-15 10:39:37.755700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729862 ] 00:20:22.122 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.122 [2024-05-15 10:39:37.885743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.122 [2024-05-15 10:39:37.982914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.690 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:22.690 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:22.690 10:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.948 [2024-05-15 10:39:38.608974] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:22.948 [2024-05-15 10:39:38.610840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:20:22.948 [2024-05-15 10:39:38.611834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.948 [2024-05-15 10:39:38.611851] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:22.948 [2024-05-15 10:39:38.611866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:22.948 request: 00:20:22.948 { 00:20:22.948 "name": "TLSTEST", 00:20:22.948 "trtype": "tcp", 00:20:22.948 "traddr": "10.0.0.2", 00:20:22.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.948 "adrfam": "ipv4", 00:20:22.948 "trsvcid": "4420", 00:20:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.948 "method": "bdev_nvme_attach_controller", 00:20:22.948 "req_id": 1 00:20:22.948 } 00:20:22.948 Got JSON-RPC error response 00:20:22.948 response: 00:20:22.948 { 00:20:22.948 "code": -32602, 00:20:22.948 "message": "Invalid parameters" 00:20:22.948 } 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2729862 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2729862 ']' 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2729862 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2729862 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2729862' 00:20:22.948 killing process with pid 2729862 00:20:22.948 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2729862 00:20:22.949 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.949 00:20:22.949 Latency(us) 00:20:22.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.949 =================================================================================================================== 00:20:22.949 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.949 10:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2729862 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2724164 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2724164 ']' 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2724164 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:23.208 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2724164 00:20:23.468 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:23.468 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:23.468 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2724164' 00:20:23.468 killing process with pid 2724164 00:20:23.468 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2724164 
00:20:23.468 [2024-05-15 10:39:39.083052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:23.468 [2024-05-15 10:39:39.083098] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:23.468 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2724164 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.OW7pJdQFBN 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.OW7pJdQFBN 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2730198 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2730198 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2730198 ']' 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:24.039 10:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.039 [2024-05-15 10:39:39.759709] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
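The format_interchange_psk/format_key step traced just above shells out to Python to turn the raw 48-character key into the NVMeTLSkey-1 interchange string that gets written to /tmp/tmp.OW7pJdQFBN. A minimal sketch of what that step appears to compute is below; the little-endian CRC-32 and the zero-padded digest field are assumptions inferred from the key printed above, not a verbatim copy of the helper.

# Sketch only: rebuild an NVMeTLSkey-1 interchange string from a raw key,
# assuming a CRC-32 of the key (little-endian) is appended before base64-encoding.
key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$key" "$digest" <<'EOF'
import base64
import sys
import zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # byte order assumed
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02}:{}:".format(digest, b64))
EOF
# Should print a string of the form NVMeTLSkey-1:02:MDAx...==: as seen in the trace above.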
00:20:24.039 [2024-05-15 10:39:39.759815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.039 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.039 [2024-05-15 10:39:39.878626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.330 [2024-05-15 10:39:39.978168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.330 [2024-05-15 10:39:39.978226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.330 [2024-05-15 10:39:39.978236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.330 [2024-05-15 10:39:39.978246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.330 [2024-05-15 10:39:39.978255] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.330 [2024-05-15 10:39:39.978299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OW7pJdQFBN 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:24.932 [2024-05-15 10:39:40.699981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.932 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.191 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.191 [2024-05-15 10:39:40.971994] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:25.191 [2024-05-15 10:39:40.972106] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.191 [2024-05-15 10:39:40.972332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.191 10:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.450 malloc0 00:20:25.450 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.450 10:39:41 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:25.708 [2024-05-15 10:39:41.414353] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OW7pJdQFBN 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OW7pJdQFBN' 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2730614 00:20:25.708 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2730614 /var/tmp/bdevperf.sock 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2730614 ']' 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:25.709 10:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.709 [2024-05-15 10:39:41.508273] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
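For readability, the target-side TLS setup that scrolled past above (target/tls.sh lines 51 through 58) boils down to the following RPC sequence; the paths, key file and NQNs are the ones used in this run, this is a recap of the traced commands rather than a new procedure.

# Recap of the RPCs traced above.
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN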
00:20:25.709 [2024-05-15 10:39:41.508401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730614 ] 00:20:25.967 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.967 [2024-05-15 10:39:41.626022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.967 [2024-05-15 10:39:41.723297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.533 10:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:26.533 10:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:26.533 10:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:26.533 [2024-05-15 10:39:42.317349] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.533 [2024-05-15 10:39:42.317457] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.533 TLSTESTn1 00:20:26.533 10:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.791 Running I/O for 10 seconds... 00:20:36.772 00:20:36.772 Latency(us) 00:20:36.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.772 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.772 Verification LBA range: start 0x0 length 0x2000 00:20:36.772 TLSTESTn1 : 10.01 5794.56 22.64 0.00 0.00 22056.65 5794.76 28973.81 00:20:36.772 =================================================================================================================== 00:20:36.772 Total : 5794.56 22.64 0.00 0.00 22056.65 5794.76 28973.81 00:20:36.772 0 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2730614 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2730614 ']' 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2730614 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2730614 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2730614' 00:20:36.772 killing process with pid 2730614 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2730614 00:20:36.772 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.772 00:20:36.772 Latency(us) 00:20:36.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.772 
=================================================================================================================== 00:20:36.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.772 [2024-05-15 10:39:52.541247] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.772 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2730614 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.OW7pJdQFBN 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OW7pJdQFBN 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OW7pJdQFBN 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OW7pJdQFBN 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OW7pJdQFBN' 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2732858 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2732858 /var/tmp/bdevperf.sock 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2732858 ']' 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:37.341 10:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.341 [2024-05-15 10:39:52.983108] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:20:37.341 [2024-05-15 10:39:52.983205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732858 ] 00:20:37.341 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.341 [2024-05-15 10:39:53.073630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.341 [2024-05-15 10:39:53.170854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.906 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:37.906 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:37.906 10:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:38.165 [2024-05-15 10:39:53.851238] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.165 [2024-05-15 10:39:53.851291] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:38.165 [2024-05-15 10:39:53.851302] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.OW7pJdQFBN 00:20:38.165 request: 00:20:38.165 { 00:20:38.165 "name": "TLSTEST", 00:20:38.165 "trtype": "tcp", 00:20:38.165 "traddr": "10.0.0.2", 00:20:38.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.165 "adrfam": "ipv4", 00:20:38.165 "trsvcid": "4420", 00:20:38.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.165 "psk": "/tmp/tmp.OW7pJdQFBN", 00:20:38.165 "method": "bdev_nvme_attach_controller", 00:20:38.165 "req_id": 1 00:20:38.165 } 00:20:38.165 Got JSON-RPC error response 00:20:38.165 response: 00:20:38.165 { 00:20:38.165 "code": -1, 00:20:38.165 "message": "Operation not permitted" 00:20:38.165 } 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2732858 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2732858 ']' 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2732858 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2732858 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2732858' 00:20:38.165 killing process with pid 2732858 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2732858 00:20:38.165 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.165 00:20:38.165 Latency(us) 00:20:38.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.165 =================================================================================================================== 00:20:38.165 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.165 10:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # 
wait 2732858 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2730198 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2730198 ']' 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2730198 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2730198 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2730198' 00:20:38.424 killing process with pid 2730198 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2730198 00:20:38.424 [2024-05-15 10:39:54.297335] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:38.424 [2024-05-15 10:39:54.297397] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:38.424 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2730198 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2733177 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2733177 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2733177 ']' 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
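The chmod 0666 in the run above deliberately loosens the permissions on the PSK file: the initiator-side attach then fails with "Incorrect permissions for PSK file" and "Operation not permitted", and the same file is rejected a little further down when nvmf_subsystem_add_host tries to read it on the target side. A pre-flight check along the following lines would catch this before issuing the RPCs; the 0600 expectation is taken from the chmod 0600 used elsewhere in this trace, and the snippet is a sketch, not part of the test.

# Sketch: refuse to hand a group/world-accessible PSK file to the RPCs.
key_path=/tmp/tmp.OW7pJdQFBN
mode=$(stat -c '%a' "$key_path")
if [ "$mode" != "600" ]; then
	echo "PSK file $key_path has mode $mode, expected 0600" >&2
	exit 1
fi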
00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:38.993 10:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.250 [2024-05-15 10:39:54.895996] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:20:39.250 [2024-05-15 10:39:54.896109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.250 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.250 [2024-05-15 10:39:54.988972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.250 [2024-05-15 10:39:55.079250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.250 [2024-05-15 10:39:55.079289] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.250 [2024-05-15 10:39:55.079300] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.251 [2024-05-15 10:39:55.079309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.251 [2024-05-15 10:39:55.079316] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.251 [2024-05-15 10:39:55.079348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.815 10:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OW7pJdQFBN 00:20:39.816 10:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.075 [2024-05-15 10:39:55.760729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.075 10:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.075 10:39:55 nvmf_tcp.nvmf_tls -- 
target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.335 [2024-05-15 10:39:56.028746] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:40.335 [2024-05-15 10:39:56.028850] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.335 [2024-05-15 10:39:56.029124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.335 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.335 malloc0 00:20:40.335 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.596 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:40.596 [2024-05-15 10:39:56.454600] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:40.596 [2024-05-15 10:39:56.454640] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:40.597 [2024-05-15 10:39:56.454662] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:40.597 request: 00:20:40.597 { 00:20:40.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.597 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.597 "psk": "/tmp/tmp.OW7pJdQFBN", 00:20:40.597 "method": "nvmf_subsystem_add_host", 00:20:40.597 "req_id": 1 00:20:40.597 } 00:20:40.597 Got JSON-RPC error response 00:20:40.597 response: 00:20:40.597 { 00:20:40.597 "code": -32603, 00:20:40.597 "message": "Internal error" 00:20:40.597 } 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2733177 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2733177 ']' 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2733177 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2733177 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2733177' 00:20:40.858 killing process with pid 2733177 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2733177 00:20:40.858 [2024-05-15 10:39:56.505588] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:40.858 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2733177 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.OW7pJdQFBN 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2733598 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2733598 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2733598 ']' 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:41.116 10:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.373 [2024-05-15 10:39:57.062081] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:20:41.373 [2024-05-15 10:39:57.062189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.373 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.373 [2024-05-15 10:39:57.182230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.631 [2024-05-15 10:39:57.281670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.631 [2024-05-15 10:39:57.281716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.631 [2024-05-15 10:39:57.281728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.631 [2024-05-15 10:39:57.281739] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.631 [2024-05-15 10:39:57.281749] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.631 [2024-05-15 10:39:57.281787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.891 10:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:41.891 10:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:41.891 10:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.891 10:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:41.891 10:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.151 10:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.151 10:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:42.151 10:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OW7pJdQFBN 00:20:42.151 10:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.151 [2024-05-15 10:39:57.909379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.151 10:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.412 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.412 [2024-05-15 10:39:58.213387] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:42.412 [2024-05-15 10:39:58.213472] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.412 [2024-05-15 10:39:58.213718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.412 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.672 malloc0 00:20:42.672 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:42.931 [2024-05-15 10:39:58.686014] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2734027 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2734027 /var/tmp/bdevperf.sock 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2734027 ']' 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.931 
10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:42.931 10:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 [2024-05-15 10:39:58.753245] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:20:42.931 [2024-05-15 10:39:58.753327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734027 ] 00:20:43.189 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.189 [2024-05-15 10:39:58.854283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.189 [2024-05-15 10:39:58.949433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.757 10:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:43.757 10:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:43.757 10:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:43.757 [2024-05-15 10:39:59.601771] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.757 [2024-05-15 10:39:59.601909] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.017 TLSTESTn1 00:20:44.017 10:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:20:44.277 10:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:44.277 "subsystems": [ 00:20:44.277 { 00:20:44.277 "subsystem": "keyring", 00:20:44.277 "config": [] 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "subsystem": "iobuf", 00:20:44.277 "config": [ 00:20:44.277 { 00:20:44.277 "method": "iobuf_set_options", 00:20:44.277 "params": { 00:20:44.277 "small_pool_count": 8192, 00:20:44.277 "large_pool_count": 1024, 00:20:44.277 "small_bufsize": 8192, 00:20:44.277 "large_bufsize": 135168 00:20:44.277 } 00:20:44.277 } 00:20:44.277 ] 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "subsystem": "sock", 00:20:44.277 "config": [ 00:20:44.277 { 00:20:44.277 "method": "sock_impl_set_options", 00:20:44.277 "params": { 00:20:44.277 "impl_name": "posix", 00:20:44.277 "recv_buf_size": 2097152, 00:20:44.277 "send_buf_size": 2097152, 00:20:44.277 "enable_recv_pipe": true, 00:20:44.277 "enable_quickack": false, 00:20:44.277 "enable_placement_id": 0, 00:20:44.277 "enable_zerocopy_send_server": true, 00:20:44.277 "enable_zerocopy_send_client": false, 00:20:44.277 "zerocopy_threshold": 0, 00:20:44.277 "tls_version": 0, 00:20:44.277 "enable_ktls": false 00:20:44.277 } 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "method": "sock_impl_set_options", 00:20:44.277 "params": { 00:20:44.277 "impl_name": "ssl", 00:20:44.277 "recv_buf_size": 4096, 
00:20:44.277 "send_buf_size": 4096, 00:20:44.277 "enable_recv_pipe": true, 00:20:44.277 "enable_quickack": false, 00:20:44.277 "enable_placement_id": 0, 00:20:44.277 "enable_zerocopy_send_server": true, 00:20:44.277 "enable_zerocopy_send_client": false, 00:20:44.277 "zerocopy_threshold": 0, 00:20:44.277 "tls_version": 0, 00:20:44.277 "enable_ktls": false 00:20:44.277 } 00:20:44.277 } 00:20:44.277 ] 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "subsystem": "vmd", 00:20:44.277 "config": [] 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "subsystem": "accel", 00:20:44.277 "config": [ 00:20:44.277 { 00:20:44.277 "method": "accel_set_options", 00:20:44.277 "params": { 00:20:44.277 "small_cache_size": 128, 00:20:44.277 "large_cache_size": 16, 00:20:44.277 "task_count": 2048, 00:20:44.277 "sequence_count": 2048, 00:20:44.277 "buf_count": 2048 00:20:44.277 } 00:20:44.277 } 00:20:44.277 ] 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "subsystem": "bdev", 00:20:44.277 "config": [ 00:20:44.277 { 00:20:44.277 "method": "bdev_set_options", 00:20:44.277 "params": { 00:20:44.277 "bdev_io_pool_size": 65535, 00:20:44.277 "bdev_io_cache_size": 256, 00:20:44.277 "bdev_auto_examine": true, 00:20:44.277 "iobuf_small_cache_size": 128, 00:20:44.277 "iobuf_large_cache_size": 16 00:20:44.277 } 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "method": "bdev_raid_set_options", 00:20:44.277 "params": { 00:20:44.277 "process_window_size_kb": 1024 00:20:44.277 } 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "method": "bdev_iscsi_set_options", 00:20:44.277 "params": { 00:20:44.277 "timeout_sec": 30 00:20:44.277 } 00:20:44.277 }, 00:20:44.277 { 00:20:44.277 "method": "bdev_nvme_set_options", 00:20:44.277 "params": { 00:20:44.277 "action_on_timeout": "none", 00:20:44.277 "timeout_us": 0, 00:20:44.277 "timeout_admin_us": 0, 00:20:44.277 "keep_alive_timeout_ms": 10000, 00:20:44.277 "arbitration_burst": 0, 00:20:44.277 "low_priority_weight": 0, 00:20:44.277 "medium_priority_weight": 0, 00:20:44.277 "high_priority_weight": 0, 00:20:44.277 "nvme_adminq_poll_period_us": 10000, 00:20:44.277 "nvme_ioq_poll_period_us": 0, 00:20:44.277 "io_queue_requests": 0, 00:20:44.277 "delay_cmd_submit": true, 00:20:44.277 "transport_retry_count": 4, 00:20:44.277 "bdev_retry_count": 3, 00:20:44.277 "transport_ack_timeout": 0, 00:20:44.277 "ctrlr_loss_timeout_sec": 0, 00:20:44.278 "reconnect_delay_sec": 0, 00:20:44.278 "fast_io_fail_timeout_sec": 0, 00:20:44.278 "disable_auto_failback": false, 00:20:44.278 "generate_uuids": false, 00:20:44.278 "transport_tos": 0, 00:20:44.278 "nvme_error_stat": false, 00:20:44.278 "rdma_srq_size": 0, 00:20:44.278 "io_path_stat": false, 00:20:44.278 "allow_accel_sequence": false, 00:20:44.278 "rdma_max_cq_size": 0, 00:20:44.278 "rdma_cm_event_timeout_ms": 0, 00:20:44.278 "dhchap_digests": [ 00:20:44.278 "sha256", 00:20:44.278 "sha384", 00:20:44.278 "sha512" 00:20:44.278 ], 00:20:44.278 "dhchap_dhgroups": [ 00:20:44.278 "null", 00:20:44.278 "ffdhe2048", 00:20:44.278 "ffdhe3072", 00:20:44.278 "ffdhe4096", 00:20:44.278 "ffdhe6144", 00:20:44.278 "ffdhe8192" 00:20:44.278 ] 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "bdev_nvme_set_hotplug", 00:20:44.278 "params": { 00:20:44.278 "period_us": 100000, 00:20:44.278 "enable": false 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "bdev_malloc_create", 00:20:44.278 "params": { 00:20:44.278 "name": "malloc0", 00:20:44.278 "num_blocks": 8192, 00:20:44.278 "block_size": 4096, 00:20:44.278 "physical_block_size": 4096, 00:20:44.278 "uuid": 
"8e7d4cb2-c4d5-40c9-8e39-852770382aa0", 00:20:44.278 "optimal_io_boundary": 0 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "bdev_wait_for_examine" 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "nbd", 00:20:44.278 "config": [] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "scheduler", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.278 "method": "framework_set_scheduler", 00:20:44.278 "params": { 00:20:44.278 "name": "static" 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "nvmf", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.278 "method": "nvmf_set_config", 00:20:44.278 "params": { 00:20:44.278 "discovery_filter": "match_any", 00:20:44.278 "admin_cmd_passthru": { 00:20:44.278 "identify_ctrlr": false 00:20:44.278 } 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_set_max_subsystems", 00:20:44.278 "params": { 00:20:44.278 "max_subsystems": 1024 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_set_crdt", 00:20:44.278 "params": { 00:20:44.278 "crdt1": 0, 00:20:44.278 "crdt2": 0, 00:20:44.278 "crdt3": 0 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_create_transport", 00:20:44.278 "params": { 00:20:44.278 "trtype": "TCP", 00:20:44.278 "max_queue_depth": 128, 00:20:44.278 "max_io_qpairs_per_ctrlr": 127, 00:20:44.278 "in_capsule_data_size": 4096, 00:20:44.278 "max_io_size": 131072, 00:20:44.278 "io_unit_size": 131072, 00:20:44.278 "max_aq_depth": 128, 00:20:44.278 "num_shared_buffers": 511, 00:20:44.278 "buf_cache_size": 4294967295, 00:20:44.278 "dif_insert_or_strip": false, 00:20:44.278 "zcopy": false, 00:20:44.278 "c2h_success": false, 00:20:44.278 "sock_priority": 0, 00:20:44.278 "abort_timeout_sec": 1, 00:20:44.278 "ack_timeout": 0, 00:20:44.278 "data_wr_pool_size": 0 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_create_subsystem", 00:20:44.278 "params": { 00:20:44.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.278 "allow_any_host": false, 00:20:44.278 "serial_number": "SPDK00000000000001", 00:20:44.278 "model_number": "SPDK bdev Controller", 00:20:44.278 "max_namespaces": 10, 00:20:44.278 "min_cntlid": 1, 00:20:44.278 "max_cntlid": 65519, 00:20:44.278 "ana_reporting": false 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_subsystem_add_host", 00:20:44.278 "params": { 00:20:44.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.278 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.278 "psk": "/tmp/tmp.OW7pJdQFBN" 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_subsystem_add_ns", 00:20:44.278 "params": { 00:20:44.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.278 "namespace": { 00:20:44.278 "nsid": 1, 00:20:44.278 "bdev_name": "malloc0", 00:20:44.278 "nguid": "8E7D4CB2C4D540C98E39852770382AA0", 00:20:44.278 "uuid": "8e7d4cb2-c4d5-40c9-8e39-852770382aa0", 00:20:44.278 "no_auto_visible": false 00:20:44.278 } 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "nvmf_subsystem_add_listener", 00:20:44.278 "params": { 00:20:44.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.278 "listen_address": { 00:20:44.278 "trtype": "TCP", 00:20:44.278 "adrfam": "IPv4", 00:20:44.278 "traddr": "10.0.0.2", 00:20:44.278 "trsvcid": "4420" 00:20:44.278 }, 00:20:44.278 "secure_channel": true 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }' 00:20:44.278 10:39:59 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:44.278 10:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:44.278 "subsystems": [ 00:20:44.278 { 00:20:44.278 "subsystem": "keyring", 00:20:44.278 "config": [] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "iobuf", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.278 "method": "iobuf_set_options", 00:20:44.278 "params": { 00:20:44.278 "small_pool_count": 8192, 00:20:44.278 "large_pool_count": 1024, 00:20:44.278 "small_bufsize": 8192, 00:20:44.278 "large_bufsize": 135168 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "sock", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.278 "method": "sock_impl_set_options", 00:20:44.278 "params": { 00:20:44.278 "impl_name": "posix", 00:20:44.278 "recv_buf_size": 2097152, 00:20:44.278 "send_buf_size": 2097152, 00:20:44.278 "enable_recv_pipe": true, 00:20:44.278 "enable_quickack": false, 00:20:44.278 "enable_placement_id": 0, 00:20:44.278 "enable_zerocopy_send_server": true, 00:20:44.278 "enable_zerocopy_send_client": false, 00:20:44.278 "zerocopy_threshold": 0, 00:20:44.278 "tls_version": 0, 00:20:44.278 "enable_ktls": false 00:20:44.278 } 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "method": "sock_impl_set_options", 00:20:44.278 "params": { 00:20:44.278 "impl_name": "ssl", 00:20:44.278 "recv_buf_size": 4096, 00:20:44.278 "send_buf_size": 4096, 00:20:44.278 "enable_recv_pipe": true, 00:20:44.278 "enable_quickack": false, 00:20:44.278 "enable_placement_id": 0, 00:20:44.278 "enable_zerocopy_send_server": true, 00:20:44.278 "enable_zerocopy_send_client": false, 00:20:44.278 "zerocopy_threshold": 0, 00:20:44.278 "tls_version": 0, 00:20:44.278 "enable_ktls": false 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "vmd", 00:20:44.278 "config": [] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "accel", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.278 "method": "accel_set_options", 00:20:44.278 "params": { 00:20:44.278 "small_cache_size": 128, 00:20:44.278 "large_cache_size": 16, 00:20:44.278 "task_count": 2048, 00:20:44.278 "sequence_count": 2048, 00:20:44.278 "buf_count": 2048 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ] 00:20:44.278 }, 00:20:44.278 { 00:20:44.278 "subsystem": "bdev", 00:20:44.278 "config": [ 00:20:44.278 { 00:20:44.279 "method": "bdev_set_options", 00:20:44.279 "params": { 00:20:44.279 "bdev_io_pool_size": 65535, 00:20:44.279 "bdev_io_cache_size": 256, 00:20:44.279 "bdev_auto_examine": true, 00:20:44.279 "iobuf_small_cache_size": 128, 00:20:44.279 "iobuf_large_cache_size": 16 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_raid_set_options", 00:20:44.279 "params": { 00:20:44.279 "process_window_size_kb": 1024 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_iscsi_set_options", 00:20:44.279 "params": { 00:20:44.279 "timeout_sec": 30 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_nvme_set_options", 00:20:44.279 "params": { 00:20:44.279 "action_on_timeout": "none", 00:20:44.279 "timeout_us": 0, 00:20:44.279 "timeout_admin_us": 0, 00:20:44.279 "keep_alive_timeout_ms": 10000, 00:20:44.279 "arbitration_burst": 0, 00:20:44.279 "low_priority_weight": 0, 00:20:44.279 "medium_priority_weight": 0, 00:20:44.279 "high_priority_weight": 0, 00:20:44.279 
"nvme_adminq_poll_period_us": 10000, 00:20:44.279 "nvme_ioq_poll_period_us": 0, 00:20:44.279 "io_queue_requests": 512, 00:20:44.279 "delay_cmd_submit": true, 00:20:44.279 "transport_retry_count": 4, 00:20:44.279 "bdev_retry_count": 3, 00:20:44.279 "transport_ack_timeout": 0, 00:20:44.279 "ctrlr_loss_timeout_sec": 0, 00:20:44.279 "reconnect_delay_sec": 0, 00:20:44.279 "fast_io_fail_timeout_sec": 0, 00:20:44.279 "disable_auto_failback": false, 00:20:44.279 "generate_uuids": false, 00:20:44.279 "transport_tos": 0, 00:20:44.279 "nvme_error_stat": false, 00:20:44.279 "rdma_srq_size": 0, 00:20:44.279 "io_path_stat": false, 00:20:44.279 "allow_accel_sequence": false, 00:20:44.279 "rdma_max_cq_size": 0, 00:20:44.279 "rdma_cm_event_timeout_ms": 0, 00:20:44.279 "dhchap_digests": [ 00:20:44.279 "sha256", 00:20:44.279 "sha384", 00:20:44.279 "sha512" 00:20:44.279 ], 00:20:44.279 "dhchap_dhgroups": [ 00:20:44.279 "null", 00:20:44.279 "ffdhe2048", 00:20:44.279 "ffdhe3072", 00:20:44.279 "ffdhe4096", 00:20:44.279 "ffdhe6144", 00:20:44.279 "ffdhe8192" 00:20:44.279 ] 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_nvme_attach_controller", 00:20:44.279 "params": { 00:20:44.279 "name": "TLSTEST", 00:20:44.279 "trtype": "TCP", 00:20:44.279 "adrfam": "IPv4", 00:20:44.279 "traddr": "10.0.0.2", 00:20:44.279 "trsvcid": "4420", 00:20:44.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.279 "prchk_reftag": false, 00:20:44.279 "prchk_guard": false, 00:20:44.279 "ctrlr_loss_timeout_sec": 0, 00:20:44.279 "reconnect_delay_sec": 0, 00:20:44.279 "fast_io_fail_timeout_sec": 0, 00:20:44.279 "psk": "/tmp/tmp.OW7pJdQFBN", 00:20:44.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.279 "hdgst": false, 00:20:44.279 "ddgst": false 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_nvme_set_hotplug", 00:20:44.279 "params": { 00:20:44.279 "period_us": 100000, 00:20:44.279 "enable": false 00:20:44.279 } 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "method": "bdev_wait_for_examine" 00:20:44.279 } 00:20:44.279 ] 00:20:44.279 }, 00:20:44.279 { 00:20:44.279 "subsystem": "nbd", 00:20:44.279 "config": [] 00:20:44.279 } 00:20:44.279 ] 00:20:44.279 }' 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2734027 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2734027 ']' 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2734027 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:44.279 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2734027 00:20:44.537 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:44.537 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:44.537 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2734027' 00:20:44.537 killing process with pid 2734027 00:20:44.537 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2734027 00:20:44.537 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.537 00:20:44.537 Latency(us) 00:20:44.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.537 =================================================================================================================== 
00:20:44.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.537 [2024-05-15 10:40:00.163981] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.537 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2734027 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2733598 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2733598 ']' 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2733598 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2733598 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2733598' 00:20:44.795 killing process with pid 2733598 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2733598 00:20:44.795 [2024-05-15 10:40:00.569698] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:44.795 [2024-05-15 10:40:00.569753] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.795 10:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2733598 00:20:45.360 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:45.360 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.360 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:45.360 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.360 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:45.360 "subsystems": [ 00:20:45.360 { 00:20:45.360 "subsystem": "keyring", 00:20:45.360 "config": [] 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "subsystem": "iobuf", 00:20:45.360 "config": [ 00:20:45.360 { 00:20:45.360 "method": "iobuf_set_options", 00:20:45.360 "params": { 00:20:45.360 "small_pool_count": 8192, 00:20:45.360 "large_pool_count": 1024, 00:20:45.360 "small_bufsize": 8192, 00:20:45.360 "large_bufsize": 135168 00:20:45.360 } 00:20:45.360 } 00:20:45.360 ] 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "subsystem": "sock", 00:20:45.360 "config": [ 00:20:45.360 { 00:20:45.360 "method": "sock_impl_set_options", 00:20:45.360 "params": { 00:20:45.360 "impl_name": "posix", 00:20:45.360 "recv_buf_size": 2097152, 00:20:45.360 "send_buf_size": 2097152, 00:20:45.360 "enable_recv_pipe": true, 00:20:45.360 "enable_quickack": false, 00:20:45.360 "enable_placement_id": 0, 00:20:45.360 "enable_zerocopy_send_server": true, 00:20:45.360 "enable_zerocopy_send_client": false, 00:20:45.360 "zerocopy_threshold": 0, 00:20:45.360 "tls_version": 0, 00:20:45.360 "enable_ktls": false 00:20:45.360 } 00:20:45.360 }, 00:20:45.361 { 00:20:45.361 "method": "sock_impl_set_options", 00:20:45.361 "params": { 00:20:45.361 
"impl_name": "ssl", 00:20:45.361 "recv_buf_size": 4096, 00:20:45.361 "send_buf_size": 4096, 00:20:45.361 "enable_recv_pipe": true, 00:20:45.361 "enable_quickack": false, 00:20:45.361 "enable_placement_id": 0, 00:20:45.361 "enable_zerocopy_send_server": true, 00:20:45.361 "enable_zerocopy_send_client": false, 00:20:45.361 "zerocopy_threshold": 0, 00:20:45.361 "tls_version": 0, 00:20:45.361 "enable_ktls": false 00:20:45.361 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "vmd", 00:20:45.361 "config": [] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "accel", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "accel_set_options", 00:20:45.361 "params": { 00:20:45.361 "small_cache_size": 128, 00:20:45.361 "large_cache_size": 16, 00:20:45.361 "task_count": 2048, 00:20:45.361 "sequence_count": 2048, 00:20:45.361 "buf_count": 2048 00:20:45.361 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "bdev", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "bdev_set_options", 00:20:45.361 "params": { 00:20:45.361 "bdev_io_pool_size": 65535, 00:20:45.361 "bdev_io_cache_size": 256, 00:20:45.361 "bdev_auto_examine": true, 00:20:45.361 "iobuf_small_cache_size": 128, 00:20:45.361 "iobuf_large_cache_size": 16 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_raid_set_options", 00:20:45.361 "params": { 00:20:45.361 "process_window_size_kb": 1024 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_iscsi_set_options", 00:20:45.361 "params": { 00:20:45.361 "timeout_sec": 30 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_nvme_set_options", 00:20:45.361 "params": { 00:20:45.361 "action_on_timeout": "none", 00:20:45.361 "timeout_us": 0, 00:20:45.361 "timeout_admin_us": 0, 00:20:45.361 "keep_alive_timeout_ms": 10000, 00:20:45.361 "arbitration_burst": 0, 00:20:45.361 "low_priority_weight": 0, 00:20:45.361 "medium_priority_weight": 0, 00:20:45.361 "high_priority_weight": 0, 00:20:45.361 "nvme_adminq_poll_period_us": 10000, 00:20:45.361 "nvme_ioq_poll_period_us": 0, 00:20:45.361 "io_queue_requests": 0, 00:20:45.361 "delay_cmd_submit": true, 00:20:45.361 "transport_retry_count": 4, 00:20:45.361 "bdev_retry_count": 3, 00:20:45.361 "transport_ack_timeout": 0, 00:20:45.361 "ctrlr_loss_timeout_sec": 0, 00:20:45.361 "reconnect_delay_sec": 0, 00:20:45.361 "fast_io_fail_timeout_sec": 0, 00:20:45.361 "disable_auto_failback": false, 00:20:45.361 "generate_uuids": false, 00:20:45.361 "transport_tos": 0, 00:20:45.361 "nvme_error_stat": false, 00:20:45.361 "rdma_srq_size": 0, 00:20:45.361 "io_path_stat": false, 00:20:45.361 "allow_accel_sequence": false, 00:20:45.361 "rdma_max_cq_size": 0, 00:20:45.361 "rdma_cm_event_timeout_ms": 0, 00:20:45.361 "dhchap_digests": [ 00:20:45.361 "sha256", 00:20:45.361 "sha384", 00:20:45.361 "sha512" 00:20:45.361 ], 00:20:45.361 "dhchap_dhgroups": [ 00:20:45.361 "null", 00:20:45.361 "ffdhe2048", 00:20:45.361 "ffdhe3072", 00:20:45.361 "ffdhe4096", 00:20:45.361 "ffdhe6144", 00:20:45.361 "ffdhe8192" 00:20:45.361 ] 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_nvme_set_hotplug", 00:20:45.361 "params": { 00:20:45.361 "period_us": 100000, 00:20:45.361 "enable": false 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_malloc_create", 00:20:45.361 "params": { 00:20:45.361 "name": "malloc0", 00:20:45.361 "num_blocks": 8192, 00:20:45.361 "block_size": 4096, 00:20:45.361 
"physical_block_size": 4096, 00:20:45.361 "uuid": "8e7d4cb2-c4d5-40c9-8e39-852770382aa0", 00:20:45.361 "optimal_io_boundary": 0 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_wait_for_examine" 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "nbd", 00:20:45.361 "config": [] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "scheduler", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "framework_set_scheduler", 00:20:45.361 "params": { 00:20:45.361 "name": "static" 00:20:45.361 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "nvmf", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_config", 00:20:45.361 "params": { 00:20:45.361 "discovery_filter": "match_any", 00:20:45.361 "admin_cmd_passthru": { 00:20:45.361 "identify_ctrlr": false 00:20:45.361 } 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_max_subsystems", 00:20:45.361 "params": { 00:20:45.361 "max_subsystems": 1024 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_crdt", 00:20:45.361 "params": { 00:20:45.361 "crdt1": 0, 00:20:45.361 "crdt2": 0, 00:20:45.361 "crdt3": 0 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_create_transport", 00:20:45.361 "params": { 00:20:45.361 "trtype": "TCP", 00:20:45.361 "max_queue_depth": 128, 00:20:45.361 "max_io_qpairs_per_ctrlr": 127, 00:20:45.361 "in_capsule_data_size": 4096, 00:20:45.361 "max_io_size": 131072, 00:20:45.361 "io_unit_size": 131072, 00:20:45.361 "max_aq_depth": 128, 00:20:45.361 "num_shared_buffers": 511, 00:20:45.361 "buf_cache_size": 4294967295, 00:20:45.361 "dif_insert_or_strip": false, 00:20:45.361 "zcopy": false, 00:20:45.361 "c2h_success": false, 00:20:45.361 "sock_priority": 0, 00:20:45.361 "abort_timeout_sec": 1, 00:20:45.361 "ack_timeout": 0, 00:20:45.361 "data_wr_pool_size": 0 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_create_subsystem", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "allow_any_host": false, 00:20:45.361 "serial_number": "SPDK00000000000001", 00:20:45.361 "model_number": "SPDK bdev Controller", 00:20:45.361 "max_namespaces": 10, 00:20:45.361 "min_cntlid": 1, 00:20:45.361 "max_cntlid": 65519, 00:20:45.361 "ana_reporting": false 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_host", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.361 "psk": "/tmp/tmp.OW7pJdQFBN" 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_ns", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "namespace": { 00:20:45.361 "nsid": 1, 00:20:45.361 "bdev_name": "malloc0", 00:20:45.361 "nguid": "8E7D4CB2C4D540C98E39852770382AA0", 00:20:45.361 "uuid": "8e7d4cb2-c4d5-40c9-8e39-852770382aa0", 00:20:45.361 "no_auto_visible": false 00:20:45.361 } 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_listener", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "listen_address": { 00:20:45.361 "trtype": "TCP", 00:20:45.361 "adrfam": "IPv4", 00:20:45.362 "traddr": "10.0.0.2", 00:20:45.362 "trsvcid": "4420" 00:20:45.362 }, 00:20:45.362 "secure_channel": true 00:20:45.362 } 00:20:45.362 } 00:20:45.362 ] 00:20:45.362 } 
00:20:45.362 ] 00:20:45.362 }' 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2734440 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2734440 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2734440 ']' 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:45.362 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.362 [2024-05-15 10:40:01.097970] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:20:45.362 [2024-05-15 10:40:01.098042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.362 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.362 [2024-05-15 10:40:01.187406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.620 [2024-05-15 10:40:01.284533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.620 [2024-05-15 10:40:01.284570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.620 [2024-05-15 10:40:01.284579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.620 [2024-05-15 10:40:01.284587] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.620 [2024-05-15 10:40:01.284594] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
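Rather than re-issuing those RPCs, the target for this part of the test is restarted from the JSON captured by save_config and handed back on the command line through a process-substitution file descriptor (the '-c /dev/fd/62' above). A minimal sketch of that pattern, with the nvmfappstart wrapper and the cvl_0_0_ns_spdk network namespace from this run omitted:

  tgtconf=$(scripts/rpc.py save_config)              # capture the live target configuration
  # replay it into a fresh target; bash substitutes a /dev/fd/NN path for the <(...) stream
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")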
00:20:45.620 [2024-05-15 10:40:01.284679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.879 [2024-05-15 10:40:01.569795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.879 [2024-05-15 10:40:01.585758] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:45.879 [2024-05-15 10:40:01.601728] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:45.879 [2024-05-15 10:40:01.601800] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.879 [2024-05-15 10:40:01.602027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2734628 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2734628 /var/tmp/bdevperf.sock 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2734628 ']' 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:46.137 10:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:46.137 "subsystems": [ 00:20:46.137 { 00:20:46.137 "subsystem": "keyring", 00:20:46.137 "config": [] 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "subsystem": "iobuf", 00:20:46.137 "config": [ 00:20:46.137 { 00:20:46.137 "method": "iobuf_set_options", 00:20:46.137 "params": { 00:20:46.137 "small_pool_count": 8192, 00:20:46.137 "large_pool_count": 1024, 00:20:46.137 "small_bufsize": 8192, 00:20:46.137 "large_bufsize": 135168 00:20:46.137 } 00:20:46.137 } 00:20:46.137 ] 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "subsystem": "sock", 00:20:46.137 "config": [ 00:20:46.137 { 00:20:46.137 "method": "sock_impl_set_options", 00:20:46.137 "params": { 00:20:46.137 "impl_name": "posix", 00:20:46.137 "recv_buf_size": 2097152, 00:20:46.137 "send_buf_size": 2097152, 00:20:46.137 "enable_recv_pipe": true, 00:20:46.137 "enable_quickack": false, 00:20:46.137 "enable_placement_id": 0, 00:20:46.137 "enable_zerocopy_send_server": true, 00:20:46.137 "enable_zerocopy_send_client": false, 00:20:46.137 "zerocopy_threshold": 0, 00:20:46.137 "tls_version": 0, 00:20:46.137 "enable_ktls": false 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "sock_impl_set_options", 00:20:46.137 "params": { 00:20:46.137 "impl_name": "ssl", 00:20:46.137 "recv_buf_size": 4096, 00:20:46.137 "send_buf_size": 4096, 00:20:46.137 "enable_recv_pipe": true, 00:20:46.137 "enable_quickack": false, 00:20:46.137 "enable_placement_id": 0, 00:20:46.137 "enable_zerocopy_send_server": true, 00:20:46.137 "enable_zerocopy_send_client": false, 00:20:46.137 "zerocopy_threshold": 0, 00:20:46.137 "tls_version": 0, 00:20:46.137 "enable_ktls": false 00:20:46.137 } 00:20:46.137 } 00:20:46.137 ] 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "subsystem": "vmd", 00:20:46.137 "config": [] 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "subsystem": "accel", 00:20:46.137 "config": [ 00:20:46.137 { 00:20:46.137 "method": "accel_set_options", 00:20:46.137 "params": { 00:20:46.137 "small_cache_size": 128, 00:20:46.137 "large_cache_size": 16, 00:20:46.137 "task_count": 2048, 00:20:46.137 "sequence_count": 2048, 00:20:46.137 "buf_count": 2048 00:20:46.137 } 00:20:46.137 } 00:20:46.137 ] 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "subsystem": "bdev", 00:20:46.137 "config": [ 00:20:46.137 { 00:20:46.137 "method": "bdev_set_options", 00:20:46.137 "params": { 00:20:46.137 "bdev_io_pool_size": 65535, 00:20:46.137 "bdev_io_cache_size": 256, 00:20:46.137 "bdev_auto_examine": true, 00:20:46.137 "iobuf_small_cache_size": 128, 00:20:46.137 "iobuf_large_cache_size": 16 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_raid_set_options", 00:20:46.137 "params": { 00:20:46.137 "process_window_size_kb": 1024 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_iscsi_set_options", 00:20:46.137 "params": { 00:20:46.137 "timeout_sec": 30 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_nvme_set_options", 00:20:46.137 "params": { 00:20:46.137 "action_on_timeout": "none", 00:20:46.137 "timeout_us": 0, 00:20:46.137 "timeout_admin_us": 0, 
00:20:46.137 "keep_alive_timeout_ms": 10000, 00:20:46.137 "arbitration_burst": 0, 00:20:46.137 "low_priority_weight": 0, 00:20:46.137 "medium_priority_weight": 0, 00:20:46.137 "high_priority_weight": 0, 00:20:46.137 "nvme_adminq_poll_period_us": 10000, 00:20:46.137 "nvme_ioq_poll_period_us": 0, 00:20:46.137 "io_queue_requests": 512, 00:20:46.137 "delay_cmd_submit": true, 00:20:46.137 "transport_retry_count": 4, 00:20:46.137 "bdev_retry_count": 3, 00:20:46.137 "transport_ack_timeout": 0, 00:20:46.137 "ctrlr_loss_timeout_sec": 0, 00:20:46.137 "reconnect_delay_sec": 0, 00:20:46.137 "fast_io_fail_timeout_sec": 0, 00:20:46.137 "disable_auto_failback": false, 00:20:46.137 "generate_uuids": false, 00:20:46.137 "transport_tos": 0, 00:20:46.137 "nvme_error_stat": false, 00:20:46.137 "rdma_srq_size": 0, 00:20:46.137 "io_path_stat": false, 00:20:46.137 "allow_accel_sequence": false, 00:20:46.137 "rdma_max_cq_size": 0, 00:20:46.137 "rdma_cm_event_timeout_ms": 0, 00:20:46.137 "dhchap_digests": [ 00:20:46.137 "sha256", 00:20:46.137 "sha384", 00:20:46.137 "sha512" 00:20:46.137 ], 00:20:46.137 "dhchap_dhgroups": [ 00:20:46.137 "null", 00:20:46.137 "ffdhe2048", 00:20:46.137 "ffdhe3072", 00:20:46.137 "ffdhe4096", 00:20:46.137 "ffdhe6144", 00:20:46.137 "ffdhe8192" 00:20:46.137 ] 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_nvme_attach_controller", 00:20:46.137 "params": { 00:20:46.137 "name": "TLSTEST", 00:20:46.137 "trtype": "TCP", 00:20:46.137 "adrfam": "IPv4", 00:20:46.137 "traddr": "10.0.0.2", 00:20:46.137 "trsvcid": "4420", 00:20:46.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.137 "prchk_reftag": false, 00:20:46.137 "prchk_guard": false, 00:20:46.137 "ctrlr_loss_timeout_sec": 0, 00:20:46.137 "reconnect_delay_sec": 0, 00:20:46.137 "fast_io_fail_timeout_sec": 0, 00:20:46.137 "psk": "/tmp/tmp.OW7pJdQFBN", 00:20:46.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.137 "hdgst": false, 00:20:46.137 "ddgst": false 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_nvme_set_hotplug", 00:20:46.137 "params": { 00:20:46.137 "period_us": 100000, 00:20:46.137 "enable": false 00:20:46.137 } 00:20:46.137 }, 00:20:46.137 { 00:20:46.137 "method": "bdev_wait_for_examine" 00:20:46.137 } 00:20:46.138 ] 00:20:46.138 }, 00:20:46.138 { 00:20:46.138 "subsystem": "nbd", 00:20:46.138 "config": [] 00:20:46.138 } 00:20:46.138 ] 00:20:46.138 }' 00:20:46.138 [2024-05-15 10:40:01.921348] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
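The bdevperf side is replayed the same way: the configuration saved from its RPC socket (the bdevperfconf dump above) is echoed back through '-c /dev/fd/63', and because the bdev_nvme_attach_controller entry in that JSON carries "psk": "/tmp/tmp.OW7pJdQFBN", the TLS connection to cnode1 is re-established from the config file alone. A simplified sketch of the capture/replay pair, paths shortened from the traced commands:

  # saved earlier from the first bdevperf instance
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # second bdevperf instance re-created purely from that config
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")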
00:20:46.138 [2024-05-15 10:40:01.921464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734628 ] 00:20:46.138 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.396 [2024-05-15 10:40:02.036664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.396 [2024-05-15 10:40:02.132017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.653 [2024-05-15 10:40:02.330126] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.653 [2024-05-15 10:40:02.330231] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.910 10:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.910 10:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:46.910 10:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:46.910 Running I/O for 10 seconds... 00:20:56.936 00:20:56.936 Latency(us) 00:20:56.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.936 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:56.936 Verification LBA range: start 0x0 length 0x2000 00:20:56.936 TLSTESTn1 : 10.02 5536.83 21.63 0.00 0.00 23082.64 5691.28 40563.33 00:20:56.936 =================================================================================================================== 00:20:56.936 Total : 5536.83 21.63 0.00 0.00 23082.64 5691.28 40563.33 00:20:56.936 0 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2734628 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2734628 ']' 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2734628 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2734628 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2734628' 00:20:56.936 killing process with pid 2734628 00:20:56.936 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2734628 00:20:56.936 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.936 00:20:56.936 Latency(us) 00:20:56.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.936 =================================================================================================================== 00:20:56.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.936 [2024-05-15 10:40:12.774273] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal 
in v24.09 hit 1 times 00:20:56.937 10:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2734628 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2734440 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2734440 ']' 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2734440 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2734440 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2734440' 00:20:57.507 killing process with pid 2734440 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2734440 00:20:57.507 [2024-05-15 10:40:13.190475] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:57.507 [2024-05-15 10:40:13.190555] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.507 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2734440 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2736827 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2736827 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2736827 ']' 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.075 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:58.076 10:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.076 [2024-05-15 10:40:13.787384] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
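In the run that just finished, bdevperf was started idle ('-z') with an RPC socket at /var/tmp/bdevperf.sock, and the I/O phase was triggered separately through bdevperf.py perform_tests, which is why the 10-second TLSTESTn1 results appear only after that call. Roughly, and again with the harness wrappers left out:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  # once the socket is up (the waitforlisten step in the trace), kick off the workload
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests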
00:20:58.076 [2024-05-15 10:40:13.787508] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.076 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.076 [2024-05-15 10:40:13.911410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.332 [2024-05-15 10:40:14.010454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.332 [2024-05-15 10:40:14.010495] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.332 [2024-05-15 10:40:14.010505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.332 [2024-05-15 10:40:14.010514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.332 [2024-05-15 10:40:14.010521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.332 [2024-05-15 10:40:14.010549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.OW7pJdQFBN 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OW7pJdQFBN 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.899 [2024-05-15 10:40:14.626657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.899 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.158 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.158 [2024-05-15 10:40:14.898676] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:59.158 [2024-05-15 10:40:14.898787] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.158 [2024-05-15 10:40:14.899003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.158 10:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:59.416 malloc0 00:20:59.416 10:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:59.416 10:40:15 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OW7pJdQFBN 00:20:59.675 [2024-05-15 10:40:15.323068] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2737164 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2737164 /var/tmp/bdevperf.sock 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2737164 ']' 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:59.675 10:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.675 [2024-05-15 10:40:15.411196] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:20:59.675 [2024-05-15 10:40:15.411309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737164 ] 00:20:59.675 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.675 [2024-05-15 10:40:15.510833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.933 [2024-05-15 10:40:15.603527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.498 10:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:00.498 10:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:00.498 10:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OW7pJdQFBN 00:21:00.498 10:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.755 [2024-05-15 10:40:16.404279] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.755 nvme0n1 00:21:00.755 10:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.755 Running I/O for 1 seconds... 
00:21:02.135 00:21:02.135 Latency(us) 00:21:02.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.135 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:02.135 Verification LBA range: start 0x0 length 0x2000 00:21:02.135 nvme0n1 : 1.03 5619.70 21.95 0.00 0.00 22473.26 5656.79 31733.22 00:21:02.135 =================================================================================================================== 00:21:02.135 Total : 5619.70 21.95 0.00 0.00 22473.26 5656.79 31733.22 00:21:02.135 0 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2737164 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2737164 ']' 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2737164 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2737164 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2737164' 00:21:02.135 killing process with pid 2737164 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2737164 00:21:02.135 Received shutdown signal, test time was about 1.000000 seconds 00:21:02.135 00:21:02.135 Latency(us) 00:21:02.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.135 =================================================================================================================== 00:21:02.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.135 10:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2737164 00:21:02.135 10:40:18 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2736827 00:21:02.135 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2736827 ']' 00:21:02.135 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2736827 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2736827 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2736827' 00:21:02.393 killing process with pid 2736827 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2736827 00:21:02.393 [2024-05-15 10:40:18.049018] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:02.393 [2024-05-15 10:40:18.049084] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.393 10:40:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 2736827 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2737773 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2737773 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2737773 ']' 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.960 10:40:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:02.960 [2024-05-15 10:40:18.629110] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:02.960 [2024-05-15 10:40:18.629245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.960 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.960 [2024-05-15 10:40:18.769397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.219 [2024-05-15 10:40:18.869169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.219 [2024-05-15 10:40:18.869232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.219 [2024-05-15 10:40:18.869244] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.219 [2024-05-15 10:40:18.869253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.219 [2024-05-15 10:40:18.869261] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
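In the two later bdevperf runs (target/tls.sh@227-228 above and @255-256 below) the client no longer passes the PSK file path straight to bdev_nvme_attach_controller; it first registers the file as a named key in the keyring and then refers to it by name. The two RPCs, taken verbatim from the trace against the bdevperf socket:

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  # register the PSK file under the name key0
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OW7pJdQFBN
  # attach over TLS, referencing the keyring entry instead of the raw path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1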
00:21:03.219 [2024-05-15 10:40:18.869304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.480 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:03.480 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:03.480 10:40:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.480 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:03.480 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.739 [2024-05-15 10:40:19.378659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.739 malloc0 00:21:03.739 [2024-05-15 10:40:19.431288] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:03.739 [2024-05-15 10:40:19.431375] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.739 [2024-05-15 10:40:19.431625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2738069 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2738069 /var/tmp/bdevperf.sock 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2738069 ']' 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:03.739 10:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.739 [2024-05-15 10:40:19.512958] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:21:03.739 [2024-05-15 10:40:19.513033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738069 ] 00:21:03.739 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.739 [2024-05-15 10:40:19.601876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.996 [2024-05-15 10:40:19.695834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.563 10:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:04.563 10:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:04.563 10:40:20 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OW7pJdQFBN 00:21:04.563 10:40:20 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:04.824 [2024-05-15 10:40:20.486199] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.824 nvme0n1 00:21:04.824 10:40:20 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.824 Running I/O for 1 seconds... 00:21:06.201 00:21:06.201 Latency(us) 00:21:06.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.201 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:06.201 Verification LBA range: start 0x0 length 0x2000 00:21:06.201 nvme0n1 : 1.01 5925.78 23.15 0.00 0.00 21425.10 5622.30 24834.69 00:21:06.201 =================================================================================================================== 00:21:06.201 Total : 5925.78 23.15 0.00 0.00 21425.10 5622.30 24834.69 00:21:06.201 0 00:21:06.201 10:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:06.201 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.201 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.201 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.201 10:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:06.201 "subsystems": [ 00:21:06.201 { 00:21:06.201 "subsystem": "keyring", 00:21:06.201 "config": [ 00:21:06.201 { 00:21:06.201 "method": "keyring_file_add_key", 00:21:06.201 "params": { 00:21:06.201 "name": "key0", 00:21:06.201 "path": "/tmp/tmp.OW7pJdQFBN" 00:21:06.201 } 00:21:06.201 } 00:21:06.201 ] 00:21:06.201 }, 00:21:06.201 { 00:21:06.201 "subsystem": "iobuf", 00:21:06.201 "config": [ 00:21:06.201 { 00:21:06.201 "method": "iobuf_set_options", 00:21:06.201 "params": { 00:21:06.201 "small_pool_count": 8192, 00:21:06.201 "large_pool_count": 1024, 00:21:06.201 "small_bufsize": 8192, 00:21:06.201 "large_bufsize": 135168 00:21:06.201 } 00:21:06.201 } 00:21:06.201 ] 00:21:06.201 }, 00:21:06.201 { 00:21:06.201 "subsystem": "sock", 00:21:06.201 "config": [ 00:21:06.201 { 00:21:06.201 "method": "sock_impl_set_options", 00:21:06.201 "params": { 00:21:06.201 "impl_name": "posix", 00:21:06.201 "recv_buf_size": 2097152, 00:21:06.201 
"send_buf_size": 2097152, 00:21:06.201 "enable_recv_pipe": true, 00:21:06.201 "enable_quickack": false, 00:21:06.201 "enable_placement_id": 0, 00:21:06.201 "enable_zerocopy_send_server": true, 00:21:06.201 "enable_zerocopy_send_client": false, 00:21:06.201 "zerocopy_threshold": 0, 00:21:06.201 "tls_version": 0, 00:21:06.202 "enable_ktls": false 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "sock_impl_set_options", 00:21:06.202 "params": { 00:21:06.202 "impl_name": "ssl", 00:21:06.202 "recv_buf_size": 4096, 00:21:06.202 "send_buf_size": 4096, 00:21:06.202 "enable_recv_pipe": true, 00:21:06.202 "enable_quickack": false, 00:21:06.202 "enable_placement_id": 0, 00:21:06.202 "enable_zerocopy_send_server": true, 00:21:06.202 "enable_zerocopy_send_client": false, 00:21:06.202 "zerocopy_threshold": 0, 00:21:06.202 "tls_version": 0, 00:21:06.202 "enable_ktls": false 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "vmd", 00:21:06.202 "config": [] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "accel", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "accel_set_options", 00:21:06.202 "params": { 00:21:06.202 "small_cache_size": 128, 00:21:06.202 "large_cache_size": 16, 00:21:06.202 "task_count": 2048, 00:21:06.202 "sequence_count": 2048, 00:21:06.202 "buf_count": 2048 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "bdev", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "bdev_set_options", 00:21:06.202 "params": { 00:21:06.202 "bdev_io_pool_size": 65535, 00:21:06.202 "bdev_io_cache_size": 256, 00:21:06.202 "bdev_auto_examine": true, 00:21:06.202 "iobuf_small_cache_size": 128, 00:21:06.202 "iobuf_large_cache_size": 16 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_raid_set_options", 00:21:06.202 "params": { 00:21:06.202 "process_window_size_kb": 1024 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_iscsi_set_options", 00:21:06.202 "params": { 00:21:06.202 "timeout_sec": 30 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_nvme_set_options", 00:21:06.202 "params": { 00:21:06.202 "action_on_timeout": "none", 00:21:06.202 "timeout_us": 0, 00:21:06.202 "timeout_admin_us": 0, 00:21:06.202 "keep_alive_timeout_ms": 10000, 00:21:06.202 "arbitration_burst": 0, 00:21:06.202 "low_priority_weight": 0, 00:21:06.202 "medium_priority_weight": 0, 00:21:06.202 "high_priority_weight": 0, 00:21:06.202 "nvme_adminq_poll_period_us": 10000, 00:21:06.202 "nvme_ioq_poll_period_us": 0, 00:21:06.202 "io_queue_requests": 0, 00:21:06.202 "delay_cmd_submit": true, 00:21:06.202 "transport_retry_count": 4, 00:21:06.202 "bdev_retry_count": 3, 00:21:06.202 "transport_ack_timeout": 0, 00:21:06.202 "ctrlr_loss_timeout_sec": 0, 00:21:06.202 "reconnect_delay_sec": 0, 00:21:06.202 "fast_io_fail_timeout_sec": 0, 00:21:06.202 "disable_auto_failback": false, 00:21:06.202 "generate_uuids": false, 00:21:06.202 "transport_tos": 0, 00:21:06.202 "nvme_error_stat": false, 00:21:06.202 "rdma_srq_size": 0, 00:21:06.202 "io_path_stat": false, 00:21:06.202 "allow_accel_sequence": false, 00:21:06.202 "rdma_max_cq_size": 0, 00:21:06.202 "rdma_cm_event_timeout_ms": 0, 00:21:06.202 "dhchap_digests": [ 00:21:06.202 "sha256", 00:21:06.202 "sha384", 00:21:06.202 "sha512" 00:21:06.202 ], 00:21:06.202 "dhchap_dhgroups": [ 00:21:06.202 "null", 00:21:06.202 "ffdhe2048", 00:21:06.202 "ffdhe3072", 00:21:06.202 
"ffdhe4096", 00:21:06.202 "ffdhe6144", 00:21:06.202 "ffdhe8192" 00:21:06.202 ] 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_nvme_set_hotplug", 00:21:06.202 "params": { 00:21:06.202 "period_us": 100000, 00:21:06.202 "enable": false 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_malloc_create", 00:21:06.202 "params": { 00:21:06.202 "name": "malloc0", 00:21:06.202 "num_blocks": 8192, 00:21:06.202 "block_size": 4096, 00:21:06.202 "physical_block_size": 4096, 00:21:06.202 "uuid": "d32de059-117b-4bcc-9580-cc4e83a4c938", 00:21:06.202 "optimal_io_boundary": 0 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_wait_for_examine" 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "nbd", 00:21:06.202 "config": [] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "scheduler", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "framework_set_scheduler", 00:21:06.202 "params": { 00:21:06.202 "name": "static" 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "nvmf", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "nvmf_set_config", 00:21:06.202 "params": { 00:21:06.202 "discovery_filter": "match_any", 00:21:06.202 "admin_cmd_passthru": { 00:21:06.202 "identify_ctrlr": false 00:21:06.202 } 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_set_max_subsystems", 00:21:06.202 "params": { 00:21:06.202 "max_subsystems": 1024 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_set_crdt", 00:21:06.202 "params": { 00:21:06.202 "crdt1": 0, 00:21:06.202 "crdt2": 0, 00:21:06.202 "crdt3": 0 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_create_transport", 00:21:06.202 "params": { 00:21:06.202 "trtype": "TCP", 00:21:06.202 "max_queue_depth": 128, 00:21:06.202 "max_io_qpairs_per_ctrlr": 127, 00:21:06.202 "in_capsule_data_size": 4096, 00:21:06.202 "max_io_size": 131072, 00:21:06.202 "io_unit_size": 131072, 00:21:06.202 "max_aq_depth": 128, 00:21:06.202 "num_shared_buffers": 511, 00:21:06.202 "buf_cache_size": 4294967295, 00:21:06.202 "dif_insert_or_strip": false, 00:21:06.202 "zcopy": false, 00:21:06.202 "c2h_success": false, 00:21:06.202 "sock_priority": 0, 00:21:06.202 "abort_timeout_sec": 1, 00:21:06.202 "ack_timeout": 0, 00:21:06.202 "data_wr_pool_size": 0 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_create_subsystem", 00:21:06.202 "params": { 00:21:06.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.202 "allow_any_host": false, 00:21:06.202 "serial_number": "00000000000000000000", 00:21:06.202 "model_number": "SPDK bdev Controller", 00:21:06.202 "max_namespaces": 32, 00:21:06.202 "min_cntlid": 1, 00:21:06.202 "max_cntlid": 65519, 00:21:06.202 "ana_reporting": false 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_subsystem_add_host", 00:21:06.202 "params": { 00:21:06.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.202 "host": "nqn.2016-06.io.spdk:host1", 00:21:06.202 "psk": "key0" 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_subsystem_add_ns", 00:21:06.202 "params": { 00:21:06.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.202 "namespace": { 00:21:06.202 "nsid": 1, 00:21:06.202 "bdev_name": "malloc0", 00:21:06.202 "nguid": "D32DE059117B4BCC9580CC4E83A4C938", 00:21:06.202 "uuid": "d32de059-117b-4bcc-9580-cc4e83a4c938", 00:21:06.202 "no_auto_visible": 
false 00:21:06.202 } 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "nvmf_subsystem_add_listener", 00:21:06.202 "params": { 00:21:06.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.202 "listen_address": { 00:21:06.202 "trtype": "TCP", 00:21:06.202 "adrfam": "IPv4", 00:21:06.202 "traddr": "10.0.0.2", 00:21:06.202 "trsvcid": "4420" 00:21:06.202 }, 00:21:06.202 "secure_channel": true 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }' 00:21:06.202 10:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:06.202 10:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:06.202 "subsystems": [ 00:21:06.202 { 00:21:06.202 "subsystem": "keyring", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "keyring_file_add_key", 00:21:06.202 "params": { 00:21:06.202 "name": "key0", 00:21:06.202 "path": "/tmp/tmp.OW7pJdQFBN" 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "iobuf", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "iobuf_set_options", 00:21:06.202 "params": { 00:21:06.202 "small_pool_count": 8192, 00:21:06.202 "large_pool_count": 1024, 00:21:06.202 "small_bufsize": 8192, 00:21:06.202 "large_bufsize": 135168 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "sock", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "sock_impl_set_options", 00:21:06.202 "params": { 00:21:06.202 "impl_name": "posix", 00:21:06.202 "recv_buf_size": 2097152, 00:21:06.202 "send_buf_size": 2097152, 00:21:06.202 "enable_recv_pipe": true, 00:21:06.202 "enable_quickack": false, 00:21:06.202 "enable_placement_id": 0, 00:21:06.202 "enable_zerocopy_send_server": true, 00:21:06.202 "enable_zerocopy_send_client": false, 00:21:06.202 "zerocopy_threshold": 0, 00:21:06.202 "tls_version": 0, 00:21:06.202 "enable_ktls": false 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "sock_impl_set_options", 00:21:06.202 "params": { 00:21:06.202 "impl_name": "ssl", 00:21:06.202 "recv_buf_size": 4096, 00:21:06.202 "send_buf_size": 4096, 00:21:06.202 "enable_recv_pipe": true, 00:21:06.202 "enable_quickack": false, 00:21:06.202 "enable_placement_id": 0, 00:21:06.202 "enable_zerocopy_send_server": true, 00:21:06.202 "enable_zerocopy_send_client": false, 00:21:06.202 "zerocopy_threshold": 0, 00:21:06.202 "tls_version": 0, 00:21:06.202 "enable_ktls": false 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "vmd", 00:21:06.202 "config": [] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "accel", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "accel_set_options", 00:21:06.202 "params": { 00:21:06.202 "small_cache_size": 128, 00:21:06.202 "large_cache_size": 16, 00:21:06.202 "task_count": 2048, 00:21:06.202 "sequence_count": 2048, 00:21:06.202 "buf_count": 2048 00:21:06.202 } 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "subsystem": "bdev", 00:21:06.202 "config": [ 00:21:06.202 { 00:21:06.202 "method": "bdev_set_options", 00:21:06.202 "params": { 00:21:06.202 "bdev_io_pool_size": 65535, 00:21:06.202 "bdev_io_cache_size": 256, 00:21:06.202 "bdev_auto_examine": true, 00:21:06.202 "iobuf_small_cache_size": 128, 00:21:06.202 "iobuf_large_cache_size": 16 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 
"method": "bdev_raid_set_options", 00:21:06.202 "params": { 00:21:06.202 "process_window_size_kb": 1024 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_iscsi_set_options", 00:21:06.202 "params": { 00:21:06.202 "timeout_sec": 30 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_nvme_set_options", 00:21:06.202 "params": { 00:21:06.202 "action_on_timeout": "none", 00:21:06.202 "timeout_us": 0, 00:21:06.202 "timeout_admin_us": 0, 00:21:06.202 "keep_alive_timeout_ms": 10000, 00:21:06.202 "arbitration_burst": 0, 00:21:06.202 "low_priority_weight": 0, 00:21:06.202 "medium_priority_weight": 0, 00:21:06.202 "high_priority_weight": 0, 00:21:06.202 "nvme_adminq_poll_period_us": 10000, 00:21:06.202 "nvme_ioq_poll_period_us": 0, 00:21:06.202 "io_queue_requests": 512, 00:21:06.202 "delay_cmd_submit": true, 00:21:06.202 "transport_retry_count": 4, 00:21:06.202 "bdev_retry_count": 3, 00:21:06.202 "transport_ack_timeout": 0, 00:21:06.202 "ctrlr_loss_timeout_sec": 0, 00:21:06.202 "reconnect_delay_sec": 0, 00:21:06.202 "fast_io_fail_timeout_sec": 0, 00:21:06.202 "disable_auto_failback": false, 00:21:06.202 "generate_uuids": false, 00:21:06.202 "transport_tos": 0, 00:21:06.202 "nvme_error_stat": false, 00:21:06.202 "rdma_srq_size": 0, 00:21:06.202 "io_path_stat": false, 00:21:06.202 "allow_accel_sequence": false, 00:21:06.202 "rdma_max_cq_size": 0, 00:21:06.202 "rdma_cm_event_timeout_ms": 0, 00:21:06.202 "dhchap_digests": [ 00:21:06.202 "sha256", 00:21:06.202 "sha384", 00:21:06.202 "sha512" 00:21:06.202 ], 00:21:06.202 "dhchap_dhgroups": [ 00:21:06.202 "null", 00:21:06.202 "ffdhe2048", 00:21:06.202 "ffdhe3072", 00:21:06.202 "ffdhe4096", 00:21:06.202 "ffdhe6144", 00:21:06.202 "ffdhe8192" 00:21:06.202 ] 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "method": "bdev_nvme_attach_controller", 00:21:06.202 "params": { 00:21:06.202 "name": "nvme0", 00:21:06.202 "trtype": "TCP", 00:21:06.202 "adrfam": "IPv4", 00:21:06.202 "traddr": "10.0.0.2", 00:21:06.202 "trsvcid": "4420", 00:21:06.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.202 "prchk_reftag": false, 00:21:06.202 "prchk_guard": false, 00:21:06.202 "ctrlr_loss_timeout_sec": 0, 00:21:06.202 "reconnect_delay_sec": 0, 00:21:06.202 "fast_io_fail_timeout_sec": 0, 00:21:06.202 "psk": "key0", 00:21:06.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.202 "hdgst": false, 00:21:06.202 "ddgst": false 00:21:06.202 } 00:21:06.202 }, 00:21:06.202 { 00:21:06.203 "method": "bdev_nvme_set_hotplug", 00:21:06.203 "params": { 00:21:06.203 "period_us": 100000, 00:21:06.203 "enable": false 00:21:06.203 } 00:21:06.203 }, 00:21:06.203 { 00:21:06.203 "method": "bdev_enable_histogram", 00:21:06.203 "params": { 00:21:06.203 "name": "nvme0n1", 00:21:06.203 "enable": true 00:21:06.203 } 00:21:06.203 }, 00:21:06.203 { 00:21:06.203 "method": "bdev_wait_for_examine" 00:21:06.203 } 00:21:06.203 ] 00:21:06.203 }, 00:21:06.203 { 00:21:06.203 "subsystem": "nbd", 00:21:06.203 "config": [] 00:21:06.203 } 00:21:06.203 ] 00:21:06.203 }' 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2738069 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2738069 ']' 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2738069 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2738069 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2738069' 00:21:06.203 killing process with pid 2738069 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2738069 00:21:06.203 Received shutdown signal, test time was about 1.000000 seconds 00:21:06.203 00:21:06.203 Latency(us) 00:21:06.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.203 =================================================================================================================== 00:21:06.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.203 10:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2738069 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2737773 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2737773 ']' 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2737773 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2737773 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2737773' 00:21:06.864 killing process with pid 2737773 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2737773 00:21:06.864 [2024-05-15 10:40:22.398050] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:06.864 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2737773 00:21:07.123 10:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:07.123 10:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.123 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:07.123 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.123 10:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:07.123 "subsystems": [ 00:21:07.123 { 00:21:07.123 "subsystem": "keyring", 00:21:07.123 "config": [ 00:21:07.123 { 00:21:07.123 "method": "keyring_file_add_key", 00:21:07.123 "params": { 00:21:07.123 "name": "key0", 00:21:07.123 "path": "/tmp/tmp.OW7pJdQFBN" 00:21:07.123 } 00:21:07.123 } 00:21:07.123 ] 00:21:07.123 }, 00:21:07.123 { 00:21:07.123 "subsystem": "iobuf", 00:21:07.123 "config": [ 00:21:07.123 { 00:21:07.123 "method": "iobuf_set_options", 00:21:07.123 "params": { 00:21:07.123 "small_pool_count": 8192, 00:21:07.123 "large_pool_count": 1024, 00:21:07.123 "small_bufsize": 8192, 00:21:07.123 "large_bufsize": 135168 00:21:07.123 } 00:21:07.123 } 00:21:07.123 ] 00:21:07.123 }, 00:21:07.123 { 00:21:07.123 "subsystem": "sock", 00:21:07.123 
"config": [ 00:21:07.123 { 00:21:07.123 "method": "sock_impl_set_options", 00:21:07.123 "params": { 00:21:07.123 "impl_name": "posix", 00:21:07.123 "recv_buf_size": 2097152, 00:21:07.123 "send_buf_size": 2097152, 00:21:07.123 "enable_recv_pipe": true, 00:21:07.123 "enable_quickack": false, 00:21:07.123 "enable_placement_id": 0, 00:21:07.123 "enable_zerocopy_send_server": true, 00:21:07.123 "enable_zerocopy_send_client": false, 00:21:07.123 "zerocopy_threshold": 0, 00:21:07.123 "tls_version": 0, 00:21:07.123 "enable_ktls": false 00:21:07.123 } 00:21:07.123 }, 00:21:07.123 { 00:21:07.123 "method": "sock_impl_set_options", 00:21:07.123 "params": { 00:21:07.123 "impl_name": "ssl", 00:21:07.123 "recv_buf_size": 4096, 00:21:07.123 "send_buf_size": 4096, 00:21:07.123 "enable_recv_pipe": true, 00:21:07.123 "enable_quickack": false, 00:21:07.123 "enable_placement_id": 0, 00:21:07.123 "enable_zerocopy_send_server": true, 00:21:07.123 "enable_zerocopy_send_client": false, 00:21:07.123 "zerocopy_threshold": 0, 00:21:07.123 "tls_version": 0, 00:21:07.123 "enable_ktls": false 00:21:07.123 } 00:21:07.123 } 00:21:07.123 ] 00:21:07.123 }, 00:21:07.123 { 00:21:07.123 "subsystem": "vmd", 00:21:07.123 "config": [] 00:21:07.123 }, 00:21:07.123 { 00:21:07.123 "subsystem": "accel", 00:21:07.123 "config": [ 00:21:07.123 { 00:21:07.123 "method": "accel_set_options", 00:21:07.123 "params": { 00:21:07.123 "small_cache_size": 128, 00:21:07.123 "large_cache_size": 16, 00:21:07.123 "task_count": 2048, 00:21:07.123 "sequence_count": 2048, 00:21:07.123 "buf_count": 2048 00:21:07.123 } 00:21:07.123 } 00:21:07.123 ] 00:21:07.123 }, 00:21:07.124 { 00:21:07.124 "subsystem": "bdev", 00:21:07.124 "config": [ 00:21:07.124 { 00:21:07.124 "method": "bdev_set_options", 00:21:07.124 "params": { 00:21:07.124 "bdev_io_pool_size": 65535, 00:21:07.124 "bdev_io_cache_size": 256, 00:21:07.124 "bdev_auto_examine": true, 00:21:07.124 "iobuf_small_cache_size": 128, 00:21:07.124 "iobuf_large_cache_size": 16 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_raid_set_options", 00:21:07.124 "params": { 00:21:07.124 "process_window_size_kb": 1024 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_iscsi_set_options", 00:21:07.124 "params": { 00:21:07.124 "timeout_sec": 30 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_nvme_set_options", 00:21:07.124 "params": { 00:21:07.124 "action_on_timeout": "none", 00:21:07.124 "timeout_us": 0, 00:21:07.124 "timeout_admin_us": 0, 00:21:07.124 "keep_alive_timeout_ms": 10000, 00:21:07.124 "arbitration_burst": 0, 00:21:07.124 "low_priority_weight": 0, 00:21:07.124 "medium_priority_weight": 0, 00:21:07.124 "high_priority_weight": 0, 00:21:07.124 "nvme_adminq_poll_period_us": 10000, 00:21:07.124 "nvme_ioq_poll_period_us": 0, 00:21:07.124 "io_queue_requests": 0, 00:21:07.124 "delay_cmd_submit": true, 00:21:07.124 "transport_retry_count": 4, 00:21:07.124 "bdev_retry_count": 3, 00:21:07.124 "transport_ack_timeout": 0, 00:21:07.124 "ctrlr_loss_timeout_sec": 0, 00:21:07.124 "reconnect_delay_sec": 0, 00:21:07.124 "fast_io_fail_timeout_sec": 0, 00:21:07.124 "disable_auto_failback": false, 00:21:07.124 "generate_uuids": false, 00:21:07.124 "transport_tos": 0, 00:21:07.124 "nvme_error_stat": false, 00:21:07.124 "rdma_srq_size": 0, 00:21:07.124 "io_path_stat": false, 00:21:07.124 "allow_accel_sequence": false, 00:21:07.124 "rdma_max_cq_size": 0, 00:21:07.124 "rdma_cm_event_timeout_ms": 0, 00:21:07.124 "dhchap_digests": [ 00:21:07.124 "sha256", 
00:21:07.124 "sha384", 00:21:07.124 "sha512" 00:21:07.124 ], 00:21:07.124 "dhchap_dhgroups": [ 00:21:07.124 "null", 00:21:07.124 "ffdhe2048", 00:21:07.124 "ffdhe3072", 00:21:07.124 "ffdhe4096", 00:21:07.124 "ffdhe6144", 00:21:07.124 "ffdhe8192" 00:21:07.124 ] 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_nvme_set_hotplug", 00:21:07.124 "params": { 00:21:07.124 "period_us": 100000, 00:21:07.124 "enable": false 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_malloc_create", 00:21:07.124 "params": { 00:21:07.124 "name": "malloc0", 00:21:07.124 "num_blocks": 8192, 00:21:07.124 "block_size": 4096, 00:21:07.124 "physical_block_size": 4096, 00:21:07.124 "uuid": "d32de059-117b-4bcc-9580-cc4e83a4c938", 00:21:07.124 "optimal_io_boundary": 0 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "bdev_wait_for_examine" 00:21:07.124 } 00:21:07.124 ] 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "subsystem": "nbd", 00:21:07.124 "config": [] 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "subsystem": "scheduler", 00:21:07.124 "config": [ 00:21:07.124 { 00:21:07.124 "method": "framework_set_scheduler", 00:21:07.124 "params": { 00:21:07.124 "name": "static" 00:21:07.124 } 00:21:07.124 } 00:21:07.124 ] 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "subsystem": "nvmf", 00:21:07.124 "config": [ 00:21:07.124 { 00:21:07.124 "method": "nvmf_set_config", 00:21:07.124 "params": { 00:21:07.124 "discovery_filter": "match_any", 00:21:07.124 "admin_cmd_passthru": { 00:21:07.124 "identify_ctrlr": false 00:21:07.124 } 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_set_max_subsystems", 00:21:07.124 "params": { 00:21:07.124 "max_subsystems": 1024 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_set_crdt", 00:21:07.124 "params": { 00:21:07.124 "crdt1": 0, 00:21:07.124 "crdt2": 0, 00:21:07.124 "crdt3": 0 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_create_transport", 00:21:07.124 "params": { 00:21:07.124 "trtype": "TCP", 00:21:07.124 "max_queue_depth": 128, 00:21:07.124 "max_io_qpairs_per_ctrlr": 127, 00:21:07.124 "in_capsule_data_size": 4096, 00:21:07.124 "max_io_size": 131072, 00:21:07.124 "io_unit_size": 131072, 00:21:07.124 "max_aq_depth": 128, 00:21:07.124 "num_shared_buffers": 511, 00:21:07.124 "buf_cache_size": 4294967295, 00:21:07.124 "dif_insert_or_strip": false, 00:21:07.124 "zcopy": false, 00:21:07.124 "c2h_success": false, 00:21:07.124 "sock_priority": 0, 00:21:07.124 "abort_timeout_sec": 1, 00:21:07.124 "ack_timeout": 0, 00:21:07.124 "data_wr_pool_size": 0 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_create_subsystem", 00:21:07.124 "params": { 00:21:07.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.124 "allow_any_host": false, 00:21:07.124 "serial_number": "00000000000000000000", 00:21:07.124 "model_number": "SPDK bdev Controller", 00:21:07.124 "max_namespaces": 32, 00:21:07.124 "min_cntlid": 1, 00:21:07.124 "max_cntlid": 65519, 00:21:07.124 "ana_reporting": false 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_subsystem_add_host", 00:21:07.124 "params": { 00:21:07.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.124 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.124 "psk": "key0" 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_subsystem_add_ns", 00:21:07.124 "params": { 00:21:07.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.124 "namespace": { 00:21:07.124 "nsid": 1, 
00:21:07.124 "bdev_name": "malloc0", 00:21:07.124 "nguid": "D32DE059117B4BCC9580CC4E83A4C938", 00:21:07.124 "uuid": "d32de059-117b-4bcc-9580-cc4e83a4c938", 00:21:07.124 "no_auto_visible": false 00:21:07.124 } 00:21:07.124 } 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "method": "nvmf_subsystem_add_listener", 00:21:07.124 "params": { 00:21:07.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.124 "listen_address": { 00:21:07.124 "trtype": "TCP", 00:21:07.124 "adrfam": "IPv4", 00:21:07.124 "traddr": "10.0.0.2", 00:21:07.124 "trsvcid": "4420" 00:21:07.124 }, 00:21:07.124 "secure_channel": true 00:21:07.124 } 00:21:07.124 } 00:21:07.124 ] 00:21:07.124 } 00:21:07.124 ] 00:21:07.124 }' 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2738683 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2738683 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2738683 ']' 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:07.124 10:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.124 [2024-05-15 10:40:22.975187] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:07.124 [2024-05-15 10:40:22.975311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.381 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.381 [2024-05-15 10:40:23.099832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.381 [2024-05-15 10:40:23.200208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.381 [2024-05-15 10:40:23.200250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.381 [2024-05-15 10:40:23.200263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.381 [2024-05-15 10:40:23.200273] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.381 [2024-05-15 10:40:23.200280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.381 [2024-05-15 10:40:23.200381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.946 [2024-05-15 10:40:23.531413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.946 [2024-05-15 10:40:23.563310] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:07.946 [2024-05-15 10:40:23.563382] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.946 [2024-05-15 10:40:23.563574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2738989 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2738989 /var/tmp/bdevperf.sock 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2738989 ']' 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
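The bdevperf instance launched next is configured entirely through the JSON piped in via /dev/fd/63; that config essentially reproduces the two RPC calls issued earlier in this trace (values copied from above):

# Initiator-side setup as seen earlier: register the PSK file as key0, then
# attach an NVMe-oF/TCP controller to the TLS-secured listener using that key.
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OW7pJdQFBN
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1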
00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:07.946 10:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:07.946 "subsystems": [ 00:21:07.946 { 00:21:07.946 "subsystem": "keyring", 00:21:07.946 "config": [ 00:21:07.946 { 00:21:07.946 "method": "keyring_file_add_key", 00:21:07.946 "params": { 00:21:07.946 "name": "key0", 00:21:07.946 "path": "/tmp/tmp.OW7pJdQFBN" 00:21:07.946 } 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "iobuf", 00:21:07.946 "config": [ 00:21:07.946 { 00:21:07.946 "method": "iobuf_set_options", 00:21:07.946 "params": { 00:21:07.946 "small_pool_count": 8192, 00:21:07.946 "large_pool_count": 1024, 00:21:07.946 "small_bufsize": 8192, 00:21:07.946 "large_bufsize": 135168 00:21:07.946 } 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "sock", 00:21:07.946 "config": [ 00:21:07.946 { 00:21:07.946 "method": "sock_impl_set_options", 00:21:07.946 "params": { 00:21:07.946 "impl_name": "posix", 00:21:07.946 "recv_buf_size": 2097152, 00:21:07.946 "send_buf_size": 2097152, 00:21:07.946 "enable_recv_pipe": true, 00:21:07.946 "enable_quickack": false, 00:21:07.946 "enable_placement_id": 0, 00:21:07.946 "enable_zerocopy_send_server": true, 00:21:07.946 "enable_zerocopy_send_client": false, 00:21:07.946 "zerocopy_threshold": 0, 00:21:07.946 "tls_version": 0, 00:21:07.946 "enable_ktls": false 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "sock_impl_set_options", 00:21:07.946 "params": { 00:21:07.946 "impl_name": "ssl", 00:21:07.946 "recv_buf_size": 4096, 00:21:07.946 "send_buf_size": 4096, 00:21:07.946 "enable_recv_pipe": true, 00:21:07.946 "enable_quickack": false, 00:21:07.946 "enable_placement_id": 0, 00:21:07.946 "enable_zerocopy_send_server": true, 00:21:07.946 "enable_zerocopy_send_client": false, 00:21:07.946 "zerocopy_threshold": 0, 00:21:07.946 "tls_version": 0, 00:21:07.946 "enable_ktls": false 00:21:07.946 } 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "vmd", 00:21:07.946 "config": [] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "accel", 00:21:07.946 "config": [ 00:21:07.946 { 00:21:07.946 "method": "accel_set_options", 00:21:07.946 "params": { 00:21:07.946 "small_cache_size": 128, 00:21:07.946 "large_cache_size": 16, 00:21:07.946 "task_count": 2048, 00:21:07.946 "sequence_count": 2048, 00:21:07.946 "buf_count": 2048 00:21:07.946 } 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "bdev", 00:21:07.946 "config": [ 00:21:07.946 { 00:21:07.946 "method": "bdev_set_options", 00:21:07.946 "params": { 00:21:07.946 "bdev_io_pool_size": 65535, 00:21:07.946 "bdev_io_cache_size": 256, 00:21:07.946 "bdev_auto_examine": true, 00:21:07.946 "iobuf_small_cache_size": 128, 00:21:07.946 "iobuf_large_cache_size": 16 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_raid_set_options", 00:21:07.946 "params": { 00:21:07.946 "process_window_size_kb": 1024 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_iscsi_set_options", 00:21:07.946 "params": { 00:21:07.946 "timeout_sec": 30 00:21:07.946 } 00:21:07.946 
}, 00:21:07.946 { 00:21:07.946 "method": "bdev_nvme_set_options", 00:21:07.946 "params": { 00:21:07.946 "action_on_timeout": "none", 00:21:07.946 "timeout_us": 0, 00:21:07.946 "timeout_admin_us": 0, 00:21:07.946 "keep_alive_timeout_ms": 10000, 00:21:07.946 "arbitration_burst": 0, 00:21:07.946 "low_priority_weight": 0, 00:21:07.946 "medium_priority_weight": 0, 00:21:07.946 "high_priority_weight": 0, 00:21:07.946 "nvme_adminq_poll_period_us": 10000, 00:21:07.946 "nvme_ioq_poll_period_us": 0, 00:21:07.946 "io_queue_requests": 512, 00:21:07.946 "delay_cmd_submit": true, 00:21:07.946 "transport_retry_count": 4, 00:21:07.946 "bdev_retry_count": 3, 00:21:07.946 "transport_ack_timeout": 0, 00:21:07.946 "ctrlr_loss_timeout_sec": 0, 00:21:07.946 "reconnect_delay_sec": 0, 00:21:07.946 "fast_io_fail_timeout_sec": 0, 00:21:07.946 "disable_auto_failback": false, 00:21:07.946 "generate_uuids": false, 00:21:07.946 "transport_tos": 0, 00:21:07.946 "nvme_error_stat": false, 00:21:07.946 "rdma_srq_size": 0, 00:21:07.946 "io_path_stat": false, 00:21:07.946 "allow_accel_sequence": false, 00:21:07.946 "rdma_max_cq_size": 0, 00:21:07.946 "rdma_cm_event_timeout_ms": 0, 00:21:07.946 "dhchap_digests": [ 00:21:07.946 "sha256", 00:21:07.946 "sha384", 00:21:07.946 "sha512" 00:21:07.946 ], 00:21:07.946 "dhchap_dhgroups": [ 00:21:07.946 "null", 00:21:07.946 "ffdhe2048", 00:21:07.946 "ffdhe3072", 00:21:07.946 "ffdhe4096", 00:21:07.946 "ffdhe6144", 00:21:07.946 "ffdhe8192" 00:21:07.946 ] 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_nvme_attach_controller", 00:21:07.946 "params": { 00:21:07.946 "name": "nvme0", 00:21:07.946 "trtype": "TCP", 00:21:07.946 "adrfam": "IPv4", 00:21:07.946 "traddr": "10.0.0.2", 00:21:07.946 "trsvcid": "4420", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.946 "prchk_reftag": false, 00:21:07.946 "prchk_guard": false, 00:21:07.946 "ctrlr_loss_timeout_sec": 0, 00:21:07.946 "reconnect_delay_sec": 0, 00:21:07.946 "fast_io_fail_timeout_sec": 0, 00:21:07.946 "psk": "key0", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.946 "hdgst": false, 00:21:07.946 "ddgst": false 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_nvme_set_hotplug", 00:21:07.946 "params": { 00:21:07.946 "period_us": 100000, 00:21:07.946 "enable": false 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_enable_histogram", 00:21:07.946 "params": { 00:21:07.946 "name": "nvme0n1", 00:21:07.946 "enable": true 00:21:07.946 } 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "method": "bdev_wait_for_examine" 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }, 00:21:07.946 { 00:21:07.946 "subsystem": "nbd", 00:21:07.946 "config": [] 00:21:07.946 } 00:21:07.946 ] 00:21:07.946 }' 00:21:07.946 [2024-05-15 10:40:23.783115] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:21:07.947 [2024-05-15 10:40:23.783228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738989 ] 00:21:08.204 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.204 [2024-05-15 10:40:23.895288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.204 [2024-05-15 10:40:23.992050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.463 [2024-05-15 10:40:24.197683] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.723 10:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:08.723 10:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:08.723 10:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.723 10:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:08.982 10:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.982 10:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.982 Running I/O for 1 seconds... 00:21:09.917 00:21:09.917 Latency(us) 00:21:09.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.917 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:09.917 Verification LBA range: start 0x0 length 0x2000 00:21:09.917 nvme0n1 : 1.02 5977.73 23.35 0.00 0.00 21209.62 6726.06 23592.96 00:21:09.917 =================================================================================================================== 00:21:09.917 Total : 5977.73 23.35 0.00 0.00 21209.62 6726.06 23592.96 00:21:09.917 0 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:21:09.917 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.917 nvmf_trace.0 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2738989 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2738989 ']' 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2738989 00:21:10.176 
10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2738989 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2738989' 00:21:10.176 killing process with pid 2738989 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2738989 00:21:10.176 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.176 00:21:10.176 Latency(us) 00:21:10.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.176 =================================================================================================================== 00:21:10.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.176 10:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2738989 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.437 rmmod nvme_tcp 00:21:10.437 rmmod nvme_fabrics 00:21:10.437 rmmod nvme_keyring 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2738683 ']' 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2738683 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2738683 ']' 00:21:10.437 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2738683 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2738683 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2738683' 00:21:10.696 killing process with pid 2738683 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2738683 00:21:10.696 [2024-05-15 10:40:26.350077] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:10.696 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 
2738683 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.263 10:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.227 10:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.227 10:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TDEt3UY5ql /tmp/tmp.8urR9QNIML /tmp/tmp.OW7pJdQFBN 00:21:13.227 00:21:13.227 real 1m26.063s 00:21:13.227 user 2m16.363s 00:21:13.227 sys 0m22.057s 00:21:13.227 10:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:13.227 10:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.227 ************************************ 00:21:13.227 END TEST nvmf_tls 00:21:13.227 ************************************ 00:21:13.227 10:40:28 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.227 10:40:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:13.227 10:40:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:13.227 10:40:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.227 ************************************ 00:21:13.227 START TEST nvmf_fips 00:21:13.227 ************************************ 00:21:13.227 10:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.227 * Looking for test storage... 
00:21:13.227 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.227 10:40:29 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:13.227 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:13.228 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:13.486 Error setting digest 00:21:13.486 00F2BD930F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:13.486 00F2BD930F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:13.486 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.487 10:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.054 
10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:20.054 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:20.054 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:20.054 Found net devices under 0000:27:00.0: cvl_0_0 00:21:20.054 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:20.055 Found net devices under 0000:27:00.1: cvl_0_1 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.055 10:40:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:20.055 00:21:20.055 --- 10.0.0.2 ping statistics --- 00:21:20.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.055 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:20.055 00:21:20.055 --- 10.0.0.1 ping statistics --- 00:21:20.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.055 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2743503 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2743503 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2743503 ']' 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.055 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:20.055 [2024-05-15 10:40:35.303461] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:20.055 [2024-05-15 10:40:35.303600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.055 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.055 [2024-05-15 10:40:35.443032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.055 [2024-05-15 10:40:35.547164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.055 [2024-05-15 10:40:35.547219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:20.055 [2024-05-15 10:40:35.547230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.055 [2024-05-15 10:40:35.547241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.055 [2024-05-15 10:40:35.547249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.055 [2024-05-15 10:40:35.547295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.313 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:20.313 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:21:20.313 10:40:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.313 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:20.313 10:40:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.313 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:20.313 [2024-05-15 10:40:36.160328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.313 [2024-05-15 10:40:36.176275] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:20.313 [2024-05-15 10:40:36.176350] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.313 [2024-05-15 10:40:36.176584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.572 [2024-05-15 10:40:36.223898] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:20.572 malloc0 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2743810 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2743810 /var/tmp/bdevperf.sock 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2743810 ']' 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local max_retries=100 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 10:40:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.572 [2024-05-15 10:40:36.362919] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:20.572 [2024-05-15 10:40:36.363075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743810 ] 00:21:20.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.830 [2024-05-15 10:40:36.491649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.830 [2024-05-15 10:40:36.588905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.396 10:40:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:21.396 10:40:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:21:21.396 10:40:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:21.396 [2024-05-15 10:40:37.159792] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.396 [2024-05-15 10:40:37.159924] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:21.396 TLSTESTn1 00:21:21.396 10:40:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.655 Running I/O for 10 seconds... 
00:21:31.630 00:21:31.630 Latency(us) 00:21:31.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.630 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.630 Verification LBA range: start 0x0 length 0x2000 00:21:31.630 TLSTESTn1 : 10.01 5746.28 22.45 0.00 0.00 22241.62 5484.33 29249.75 00:21:31.630 =================================================================================================================== 00:21:31.630 Total : 5746.28 22.45 0.00 0.00 22241.62 5484.33 29249.75 00:21:31.630 0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:31.630 nvmf_trace.0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2743810 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2743810 ']' 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2743810 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2743810 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2743810' 00:21:31.630 killing process with pid 2743810 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2743810 00:21:31.630 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.630 00:21:31.630 Latency(us) 00:21:31.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.630 =================================================================================================================== 00:21:31.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.630 [2024-05-15 10:40:47.488983] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:31.630 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2743810 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.197 rmmod nvme_tcp 00:21:32.197 rmmod nvme_fabrics 00:21:32.197 rmmod nvme_keyring 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2743503 ']' 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2743503 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2743503 ']' 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2743503 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2743503 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2743503' 00:21:32.197 killing process with pid 2743503 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2743503 00:21:32.197 [2024-05-15 10:40:47.990186] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:32.197 [2024-05-15 10:40:47.990246] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:32.197 10:40:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2743503 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.765 10:40:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.300 10:40:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.300 10:40:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:35.300 00:21:35.300 real 0m21.664s 00:21:35.300 user 0m24.809s 00:21:35.300 sys 0m7.511s 00:21:35.300 10:40:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:21:35.300 10:40:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.300 ************************************ 00:21:35.300 END TEST nvmf_fips 00:21:35.300 ************************************ 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy-fallback == phy ]] 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:21:35.300 10:40:50 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:35.300 10:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.300 ************************************ 00:21:35.300 START TEST nvmf_multicontroller 00:21:35.300 ************************************ 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:35.300 * Looking for test storage... 00:21:35.300 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.300 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.301 10:40:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.301 10:40:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.301 10:40:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.575 10:40:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:40.575 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 
(0x8086 - 0x159b)' 00:21:40.575 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:40.575 Found net devices under 0000:27:00.0: cvl_0_0 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:40.575 Found net devices under 0000:27:00.1: cvl_0_1 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.575 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.576 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.576 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.576 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.576 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.836 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:21:41.097 00:21:41.097 --- 10.0.0.2 ping statistics --- 00:21:41.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.097 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:41.097 00:21:41.097 --- 10.0.0.1 ping statistics --- 00:21:41.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.097 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2750065 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2750065 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2750065 ']' 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.097 10:40:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.097 [2024-05-15 10:40:56.848466] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:41.097 [2024-05-15 10:40:56.848589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.097 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.358 [2024-05-15 10:40:56.986825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.358 [2024-05-15 10:40:57.086716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:41.358 [2024-05-15 10:40:57.086767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.358 [2024-05-15 10:40:57.086778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.358 [2024-05-15 10:40:57.086789] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.358 [2024-05-15 10:40:57.086804] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.358 [2024-05-15 10:40:57.086963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.358 [2024-05-15 10:40:57.087058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.358 [2024-05-15 10:40:57.087076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 [2024-05-15 10:40:57.598295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 Malloc0 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 [2024-05-15 10:40:57.684137] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:41.927 [2024-05-15 10:40:57.684430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 [2024-05-15 10:40:57.692275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 Malloc1 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.927 10:40:57 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2750374 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2750374 /var/tmp/bdevperf.sock 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2750374 ']' 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 10:40:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.875 NVMe0n1 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.875 1 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.875 request: 00:21:42.875 { 00:21:42.875 "name": "NVMe0", 00:21:42.875 "trtype": "tcp", 00:21:42.875 "traddr": "10.0.0.2", 00:21:42.875 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:42.875 "hostaddr": "10.0.0.2", 00:21:42.875 "hostsvcid": "60000", 00:21:42.875 "adrfam": "ipv4", 00:21:42.875 "trsvcid": "4420", 00:21:42.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.875 "method": "bdev_nvme_attach_controller", 00:21:42.875 "req_id": 1 00:21:42.875 } 00:21:42.875 Got JSON-RPC error response 00:21:42.875 response: 00:21:42.875 { 00:21:42.875 "code": -114, 00:21:42.875 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:42.875 } 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.875 request: 00:21:42.875 { 00:21:42.875 "name": "NVMe0", 00:21:42.875 "trtype": "tcp", 00:21:42.875 "traddr": "10.0.0.2", 00:21:42.875 "hostaddr": "10.0.0.2", 00:21:42.875 "hostsvcid": "60000", 00:21:42.875 "adrfam": "ipv4", 00:21:42.875 "trsvcid": "4420", 00:21:42.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.875 "method": "bdev_nvme_attach_controller", 00:21:42.875 "req_id": 1 00:21:42.875 } 00:21:42.875 Got JSON-RPC error response 00:21:42.875 response: 00:21:42.875 { 00:21:42.875 "code": -114, 00:21:42.875 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:42.875 } 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.875 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.167 request: 00:21:43.167 { 00:21:43.167 "name": "NVMe0", 00:21:43.167 "trtype": "tcp", 00:21:43.167 "traddr": "10.0.0.2", 00:21:43.167 "hostaddr": "10.0.0.2", 00:21:43.167 "hostsvcid": "60000", 00:21:43.167 "adrfam": "ipv4", 00:21:43.167 "trsvcid": "4420", 00:21:43.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.167 "multipath": "disable", 00:21:43.167 "method": "bdev_nvme_attach_controller", 00:21:43.167 "req_id": 1 00:21:43.167 } 00:21:43.167 Got JSON-RPC error response 00:21:43.167 response: 00:21:43.167 { 00:21:43.167 "code": -114, 00:21:43.167 "message": "A controller named NVMe0 already 
exists and multipath is disabled\n" 00:21:43.167 } 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.167 request: 00:21:43.167 { 00:21:43.167 "name": "NVMe0", 00:21:43.167 "trtype": "tcp", 00:21:43.167 "traddr": "10.0.0.2", 00:21:43.167 "hostaddr": "10.0.0.2", 00:21:43.167 "hostsvcid": "60000", 00:21:43.167 "adrfam": "ipv4", 00:21:43.167 "trsvcid": "4420", 00:21:43.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.167 "multipath": "failover", 00:21:43.167 "method": "bdev_nvme_attach_controller", 00:21:43.167 "req_id": 1 00:21:43.167 } 00:21:43.167 Got JSON-RPC error response 00:21:43.167 response: 00:21:43.167 { 00:21:43.167 "code": -114, 00:21:43.167 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:43.167 } 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.167 10:40:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.167 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.167 10:40:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:43.430 10:40:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.372 0 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2750374 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2750374 ']' 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2750374 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.372 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2750374 00:21:44.631 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:44.631 10:41:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:44.631 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2750374' 00:21:44.631 killing process with pid 2750374 00:21:44.631 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2750374 00:21:44.631 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2750374 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:21:44.890 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:44.890 [2024-05-15 10:40:57.845632] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:44.890 [2024-05-15 10:40:57.845784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750374 ] 00:21:44.890 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.890 [2024-05-15 10:40:57.978166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.890 [2024-05-15 10:40:58.069824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.890 [2024-05-15 10:40:59.072265] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name e6e0ea3b-061f-4166-bf1f-37a5468f9025 already exists 00:21:44.890 [2024-05-15 10:40:59.072313] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:e6e0ea3b-061f-4166-bf1f-37a5468f9025 alias for bdev NVMe1n1 00:21:44.890 [2024-05-15 10:40:59.072332] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:44.890 Running I/O for 1 seconds... 
00:21:44.890 00:21:44.890 Latency(us) 00:21:44.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.890 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:44.890 NVMe0n1 : 1.00 24951.99 97.47 0.00 0.00 5122.10 2966.37 12624.30 00:21:44.890 =================================================================================================================== 00:21:44.890 Total : 24951.99 97.47 0.00 0.00 5122.10 2966.37 12624.30 00:21:44.890 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.890 00:21:44.890 Latency(us) 00:21:44.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.890 =================================================================================================================== 00:21:44.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.890 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.890 rmmod nvme_tcp 00:21:44.890 rmmod nvme_fabrics 00:21:44.890 rmmod nvme_keyring 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2750065 ']' 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2750065 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2750065 ']' 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2750065 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2750065 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2750065' 00:21:44.890 killing process with pid 2750065 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2750065 00:21:44.890 [2024-05-15 10:41:00.760450] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:44.890 10:41:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2750065 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.825 10:41:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.733 10:41:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.733 00:21:47.733 real 0m12.669s 00:21:47.733 user 0m16.896s 00:21:47.733 sys 0m5.390s 00:21:47.733 10:41:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:47.733 10:41:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.733 ************************************ 00:21:47.733 END TEST nvmf_multicontroller 00:21:47.733 ************************************ 00:21:47.733 10:41:03 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:47.733 10:41:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:47.733 10:41:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:47.733 10:41:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.733 ************************************ 00:21:47.733 START TEST nvmf_aer 00:21:47.733 ************************************ 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:47.733 * Looking for test storage... 
00:21:47.733 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.733 10:41:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:53.007 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:53.007 Found 
0000:27:00.1 (0x8086 - 0x159b) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:53.007 Found net devices under 0000:27:00.0: cvl_0_0 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:53.007 Found net devices under 0000:27:00.1: cvl_0_1 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.007 10:41:08 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.007 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:21:53.008 00:21:53.008 --- 10.0.0.2 ping statistics --- 00:21:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.008 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:21:53.008 00:21:53.008 --- 10.0.0.1 ping statistics --- 00:21:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.008 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2754873 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2754873 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2754873 ']' 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.008 10:41:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:53.267 [2024-05-15 10:41:08.936581] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:21:53.267 [2024-05-15 10:41:08.936691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.267 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.267 [2024-05-15 10:41:09.058103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.525 [2024-05-15 10:41:09.159140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.525 [2024-05-15 10:41:09.159179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:53.525 [2024-05-15 10:41:09.159190] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.525 [2024-05-15 10:41:09.159203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.525 [2024-05-15 10:41:09.159211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.525 [2024-05-15 10:41:09.159368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.525 [2024-05-15 10:41:09.159480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.525 [2024-05-15 10:41:09.159608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.525 [2024-05-15 10:41:09.159618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.783 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:53.783 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:21:53.783 10:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.783 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:53.783 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.042 10:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.042 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.042 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.042 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.042 [2024-05-15 10:41:09.680605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.042 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 Malloc0 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 [2024-05-15 10:41:09.750080] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:54.043 [2024-05-15 10:41:09.750331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 [ 00:21:54.043 { 00:21:54.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.043 "subtype": "Discovery", 00:21:54.043 "listen_addresses": [], 00:21:54.043 "allow_any_host": true, 00:21:54.043 "hosts": [] 00:21:54.043 }, 00:21:54.043 { 00:21:54.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.043 "subtype": "NVMe", 00:21:54.043 "listen_addresses": [ 00:21:54.043 { 00:21:54.043 "trtype": "TCP", 00:21:54.043 "adrfam": "IPv4", 00:21:54.043 "traddr": "10.0.0.2", 00:21:54.043 "trsvcid": "4420" 00:21:54.043 } 00:21:54.043 ], 00:21:54.043 "allow_any_host": true, 00:21:54.043 "hosts": [], 00:21:54.043 "serial_number": "SPDK00000000000001", 00:21:54.043 "model_number": "SPDK bdev Controller", 00:21:54.043 "max_namespaces": 2, 00:21:54.043 "min_cntlid": 1, 00:21:54.043 "max_cntlid": 65519, 00:21:54.043 "namespaces": [ 00:21:54.043 { 00:21:54.043 "nsid": 1, 00:21:54.043 "bdev_name": "Malloc0", 00:21:54.043 "name": "Malloc0", 00:21:54.043 "nguid": "F621275D4FCF4BD295937324209EF72B", 00:21:54.043 "uuid": "f621275d-4fcf-4bd2-9593-7324209ef72b" 00:21:54.043 } 00:21:54.043 ] 00:21:54.043 } 00:21:54.043 ] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2755113 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:54.043 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:21:54.043 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.303 10:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.303 Malloc1 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.303 [ 00:21:54.303 { 00:21:54.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.303 "subtype": "Discovery", 00:21:54.303 "listen_addresses": [], 00:21:54.303 "allow_any_host": true, 00:21:54.303 "hosts": [] 00:21:54.303 }, 00:21:54.303 { 00:21:54.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.303 "subtype": "NVMe", 00:21:54.303 "listen_addresses": [ 00:21:54.303 { 00:21:54.303 "trtype": "TCP", 00:21:54.303 "adrfam": "IPv4", 00:21:54.303 "traddr": "10.0.0.2", 00:21:54.303 "trsvcid": "4420" 00:21:54.303 } 00:21:54.303 ], 00:21:54.303 "allow_any_host": true, 00:21:54.303 "hosts": [], 00:21:54.303 "serial_number": "SPDK00000000000001", 00:21:54.303 "model_number": "SPDK bdev Controller", 00:21:54.303 "max_namespaces": 2, 00:21:54.303 "min_cntlid": 1, 00:21:54.303 "max_cntlid": 65519, 00:21:54.303 "namespaces": [ 00:21:54.303 { 00:21:54.303 "nsid": 1, 00:21:54.303 "bdev_name": "Malloc0", 00:21:54.303 "name": "Malloc0", 00:21:54.303 "nguid": "F621275D4FCF4BD295937324209EF72B", 00:21:54.303 "uuid": "f621275d-4fcf-4bd2-9593-7324209ef72b" 00:21:54.303 }, 00:21:54.303 { 00:21:54.303 "nsid": 2, 00:21:54.303 "bdev_name": "Malloc1", 00:21:54.303 "name": "Malloc1", 00:21:54.303 "nguid": "3CDABD8F9F23465E9F25F77F481BD1ED", 00:21:54.303 "uuid": "3cdabd8f-9f23-465e-9f25-f77f481bd1ed" 00:21:54.303 } 00:21:54.303 ] 00:21:54.303 } 00:21:54.303 ] 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2755113 00:21:54.303 Asynchronous Event Request test 00:21:54.303 Attaching to 10.0.0.2 00:21:54.303 Attached to 10.0.0.2 00:21:54.303 Registering asynchronous event callbacks... 00:21:54.303 Starting namespace attribute notice tests for all controllers... 00:21:54.303 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:54.303 aer_cb - Changed Namespace 00:21:54.303 Cleaning up... 
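The trace above is the whole AER scenario: a subsystem sized for two namespaces is exported over TCP, the aer helper registers its event callbacks and touches a flag file, and attaching Malloc1 as a second namespace is what produces the Namespace Attribute Changed notice logged by aer_cb. A minimal bash sketch of the same flow, assuming a running nvmf_tgt, scripts/rpc.py reachable as rpc.py, and the aer binary at its in-tree path; this is a condensed illustration, not the exact host/aer.sh script:

    # Export a malloc bdev through an NVMe-oF/TCP subsystem that allows up to two namespaces.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start the AER reader in the background; it touches the file once its callbacks are registered.
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # simplified waitforfile; the harness caps the retries
    # Adding a second namespace is what fires the Namespace Attribute Changed AEN seen in the log.
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"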
00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.303 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.562 rmmod nvme_tcp 00:21:54.562 rmmod nvme_fabrics 00:21:54.562 rmmod nvme_keyring 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2754873 ']' 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2754873 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2754873 ']' 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 2754873 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2754873 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2754873' 00:21:54.562 killing process with pid 2754873 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2754873 00:21:54.562 [2024-05-15 10:41:10.354148] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:54.562 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2754873 
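killprocess, traced just above, is the harness helper that stops the target: kill -0 checks that the PID still exists, ps -o comm= confirms it is the SPDK reactor (reactor_0) rather than a sudo wrapper, and wait reaps it so the script sees the exit status. A rough, illustrative equivalent rather than a copy of common/autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if the process already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then                    # the real helper special-cases a sudo wrapper here
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                     # reap the target and propagate its exit status
    }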
00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.127 10:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.034 10:41:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.034 00:21:57.034 real 0m9.414s 00:21:57.034 user 0m7.657s 00:21:57.034 sys 0m4.358s 00:21:57.034 10:41:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:57.034 10:41:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.034 ************************************ 00:21:57.034 END TEST nvmf_aer 00:21:57.034 ************************************ 00:21:57.034 10:41:12 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:57.034 10:41:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:57.034 10:41:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:57.034 10:41:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.293 ************************************ 00:21:57.293 START TEST nvmf_async_init 00:21:57.293 ************************************ 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:57.293 * Looking for test storage... 
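Both the suite that just finished and the one starting here lean on the same trap-based cleanup pattern: the trap installed right after the target comes up guarantees that nvmftestfini (module unload, killprocess, network teardown) runs even if an rpc_cmd fails mid-test, and it is disarmed just before the normal teardown path. Sketched below with the harness functions left as names; process_shm and nvmftestfini come from the nvmf test common scripts, not from coreutils:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # ... rpc_cmd calls and assertions for the test body ...
    trap - SIGINT SIGTERM EXIT   # disarm once the body finished cleanly
    nvmftestfini                 # rmmod nvme-tcp/nvme-fabrics, kill the nvmf_tgt, undo the netns setup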
00:21:57.293 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.293 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:57.294 10:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cb584a14850142bca7666f4544d8af31 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.294 10:41:13 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.294 10:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:02.565 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:02.565 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:02.565 Found net devices under 0000:27:00.0: cvl_0_0 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.565 
10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:02.565 Found net devices under 0000:27:00.1: cvl_0_1 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.565 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.565 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:22:02.565 00:22:02.565 --- 10.0.0.2 ping statistics --- 00:22:02.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.566 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:22:02.566 00:22:02.566 --- 10.0.0.1 ping statistics --- 00:22:02.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.566 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2759076 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2759076 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2759076 ']' 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 10:41:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.825 [2024-05-15 10:41:18.498336] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
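The nvmftestinit sequence above turns the two ice ports into a self-contained TCP test bed: cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and a ping in each direction confirms connectivity before nvmf_tgt is launched inside the namespace. Condensed from the trace; the interface names and addresses are specific to this test bed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1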
00:22:02.825 [2024-05-15 10:41:18.498437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.825 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.825 [2024-05-15 10:41:18.616791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.084 [2024-05-15 10:41:18.716604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.084 [2024-05-15 10:41:18.716640] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.084 [2024-05-15 10:41:18.716650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.084 [2024-05-15 10:41:18.716660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.085 [2024-05-15 10:41:18.716667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.085 [2024-05-15 10:41:18.716701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.344 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:03.344 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:22:03.344 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.344 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:03.344 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 [2024-05-15 10:41:19.231281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 null0 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cb584a14850142bca7666f4544d8af31 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 [2024-05-15 10:41:19.275254] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:03.604 [2024-05-15 10:41:19.275518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.604 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 nvme0n1 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [ 00:22:03.863 { 00:22:03.863 "name": "nvme0n1", 00:22:03.863 "aliases": [ 00:22:03.863 "cb584a14-8501-42bc-a766-6f4544d8af31" 00:22:03.863 ], 00:22:03.863 "product_name": "NVMe disk", 00:22:03.863 "block_size": 512, 00:22:03.863 "num_blocks": 2097152, 00:22:03.863 "uuid": "cb584a14-8501-42bc-a766-6f4544d8af31", 00:22:03.863 "assigned_rate_limits": { 00:22:03.863 "rw_ios_per_sec": 0, 00:22:03.863 "rw_mbytes_per_sec": 0, 00:22:03.863 "r_mbytes_per_sec": 0, 00:22:03.863 "w_mbytes_per_sec": 0 00:22:03.863 }, 00:22:03.863 "claimed": false, 00:22:03.863 "zoned": false, 00:22:03.863 "supported_io_types": { 00:22:03.863 "read": true, 00:22:03.863 "write": true, 00:22:03.863 "unmap": false, 00:22:03.863 "write_zeroes": true, 00:22:03.863 "flush": true, 00:22:03.863 "reset": true, 00:22:03.863 "compare": true, 00:22:03.863 "compare_and_write": true, 00:22:03.863 "abort": true, 00:22:03.863 "nvme_admin": true, 00:22:03.863 "nvme_io": true 00:22:03.863 }, 00:22:03.863 "memory_domains": [ 00:22:03.863 { 00:22:03.863 "dma_device_id": "system", 00:22:03.863 "dma_device_type": 1 00:22:03.863 } 00:22:03.863 ], 00:22:03.863 "driver_specific": { 00:22:03.863 "nvme": [ 00:22:03.863 { 00:22:03.863 "trid": { 00:22:03.863 "trtype": "TCP", 00:22:03.863 "adrfam": "IPv4", 00:22:03.863 "traddr": "10.0.0.2", 00:22:03.863 "trsvcid": "4420", 00:22:03.863 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.863 }, 
00:22:03.863 "ctrlr_data": { 00:22:03.863 "cntlid": 1, 00:22:03.863 "vendor_id": "0x8086", 00:22:03.863 "model_number": "SPDK bdev Controller", 00:22:03.863 "serial_number": "00000000000000000000", 00:22:03.863 "firmware_revision": "24.05", 00:22:03.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.863 "oacs": { 00:22:03.863 "security": 0, 00:22:03.863 "format": 0, 00:22:03.863 "firmware": 0, 00:22:03.863 "ns_manage": 0 00:22:03.863 }, 00:22:03.863 "multi_ctrlr": true, 00:22:03.863 "ana_reporting": false 00:22:03.863 }, 00:22:03.863 "vs": { 00:22:03.863 "nvme_version": "1.3" 00:22:03.863 }, 00:22:03.863 "ns_data": { 00:22:03.863 "id": 1, 00:22:03.863 "can_share": true 00:22:03.863 } 00:22:03.863 } 00:22:03.863 ], 00:22:03.863 "mp_policy": "active_passive" 00:22:03.863 } 00:22:03.863 } 00:22:03.863 ] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [2024-05-15 10:41:19.529794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:03.863 [2024-05-15 10:41:19.529880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:22:03.863 [2024-05-15 10:41:19.662165] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [ 00:22:03.863 { 00:22:03.863 "name": "nvme0n1", 00:22:03.863 "aliases": [ 00:22:03.863 "cb584a14-8501-42bc-a766-6f4544d8af31" 00:22:03.863 ], 00:22:03.863 "product_name": "NVMe disk", 00:22:03.863 "block_size": 512, 00:22:03.863 "num_blocks": 2097152, 00:22:03.863 "uuid": "cb584a14-8501-42bc-a766-6f4544d8af31", 00:22:03.863 "assigned_rate_limits": { 00:22:03.863 "rw_ios_per_sec": 0, 00:22:03.863 "rw_mbytes_per_sec": 0, 00:22:03.863 "r_mbytes_per_sec": 0, 00:22:03.863 "w_mbytes_per_sec": 0 00:22:03.863 }, 00:22:03.863 "claimed": false, 00:22:03.863 "zoned": false, 00:22:03.863 "supported_io_types": { 00:22:03.863 "read": true, 00:22:03.863 "write": true, 00:22:03.863 "unmap": false, 00:22:03.863 "write_zeroes": true, 00:22:03.863 "flush": true, 00:22:03.863 "reset": true, 00:22:03.863 "compare": true, 00:22:03.863 "compare_and_write": true, 00:22:03.863 "abort": true, 00:22:03.863 "nvme_admin": true, 00:22:03.863 "nvme_io": true 00:22:03.863 }, 00:22:03.863 "memory_domains": [ 00:22:03.863 { 00:22:03.863 "dma_device_id": "system", 00:22:03.863 "dma_device_type": 1 00:22:03.863 } 00:22:03.863 ], 00:22:03.863 "driver_specific": { 00:22:03.863 "nvme": [ 00:22:03.863 { 00:22:03.863 "trid": { 00:22:03.863 "trtype": "TCP", 00:22:03.863 "adrfam": "IPv4", 00:22:03.863 "traddr": "10.0.0.2", 00:22:03.863 "trsvcid": "4420", 00:22:03.863 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.863 }, 00:22:03.863 "ctrlr_data": { 00:22:03.863 "cntlid": 2, 00:22:03.863 
"vendor_id": "0x8086", 00:22:03.863 "model_number": "SPDK bdev Controller", 00:22:03.863 "serial_number": "00000000000000000000", 00:22:03.863 "firmware_revision": "24.05", 00:22:03.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.863 "oacs": { 00:22:03.863 "security": 0, 00:22:03.863 "format": 0, 00:22:03.863 "firmware": 0, 00:22:03.863 "ns_manage": 0 00:22:03.863 }, 00:22:03.863 "multi_ctrlr": true, 00:22:03.863 "ana_reporting": false 00:22:03.863 }, 00:22:03.863 "vs": { 00:22:03.863 "nvme_version": "1.3" 00:22:03.863 }, 00:22:03.863 "ns_data": { 00:22:03.863 "id": 1, 00:22:03.863 "can_share": true 00:22:03.863 } 00:22:03.863 } 00:22:03.863 ], 00:22:03.863 "mp_policy": "active_passive" 00:22:03.863 } 00:22:03.863 } 00:22:03.863 ] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.EqKbWL3MCv 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.EqKbWL3MCv 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [2024-05-15 10:41:19.713911] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.863 [2024-05-15 10:41:19.714055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EqKbWL3MCv 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [2024-05-15 10:41:19.721912] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.863 10:41:19 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EqKbWL3MCv 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.863 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.863 [2024-05-15 10:41:19.729902] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.863 [2024-05-15 10:41:19.729967] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.121 nvme0n1 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.121 [ 00:22:04.121 { 00:22:04.121 "name": "nvme0n1", 00:22:04.121 "aliases": [ 00:22:04.121 "cb584a14-8501-42bc-a766-6f4544d8af31" 00:22:04.121 ], 00:22:04.121 "product_name": "NVMe disk", 00:22:04.121 "block_size": 512, 00:22:04.121 "num_blocks": 2097152, 00:22:04.121 "uuid": "cb584a14-8501-42bc-a766-6f4544d8af31", 00:22:04.121 "assigned_rate_limits": { 00:22:04.121 "rw_ios_per_sec": 0, 00:22:04.121 "rw_mbytes_per_sec": 0, 00:22:04.121 "r_mbytes_per_sec": 0, 00:22:04.121 "w_mbytes_per_sec": 0 00:22:04.121 }, 00:22:04.121 "claimed": false, 00:22:04.121 "zoned": false, 00:22:04.121 "supported_io_types": { 00:22:04.121 "read": true, 00:22:04.121 "write": true, 00:22:04.121 "unmap": false, 00:22:04.121 "write_zeroes": true, 00:22:04.121 "flush": true, 00:22:04.121 "reset": true, 00:22:04.121 "compare": true, 00:22:04.121 "compare_and_write": true, 00:22:04.121 "abort": true, 00:22:04.121 "nvme_admin": true, 00:22:04.121 "nvme_io": true 00:22:04.121 }, 00:22:04.121 "memory_domains": [ 00:22:04.121 { 00:22:04.121 "dma_device_id": "system", 00:22:04.121 "dma_device_type": 1 00:22:04.121 } 00:22:04.121 ], 00:22:04.121 "driver_specific": { 00:22:04.121 "nvme": [ 00:22:04.121 { 00:22:04.121 "trid": { 00:22:04.121 "trtype": "TCP", 00:22:04.121 "adrfam": "IPv4", 00:22:04.121 "traddr": "10.0.0.2", 00:22:04.121 "trsvcid": "4421", 00:22:04.121 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.121 }, 00:22:04.121 "ctrlr_data": { 00:22:04.121 "cntlid": 3, 00:22:04.121 "vendor_id": "0x8086", 00:22:04.121 "model_number": "SPDK bdev Controller", 00:22:04.121 "serial_number": "00000000000000000000", 00:22:04.121 "firmware_revision": "24.05", 00:22:04.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.121 "oacs": { 00:22:04.121 "security": 0, 00:22:04.121 "format": 0, 00:22:04.121 "firmware": 0, 00:22:04.121 "ns_manage": 0 00:22:04.121 }, 00:22:04.121 "multi_ctrlr": true, 00:22:04.121 "ana_reporting": false 00:22:04.121 }, 00:22:04.121 "vs": { 00:22:04.121 "nvme_version": "1.3" 00:22:04.121 }, 00:22:04.121 "ns_data": { 00:22:04.121 "id": 1, 00:22:04.121 "can_share": true 00:22:04.121 } 00:22:04.121 } 00:22:04.121 ], 00:22:04.121 "mp_policy": "active_passive" 00:22:04.121 } 00:22:04.121 } 00:22:04.121 ] 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.EqKbWL3MCv 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.121 rmmod nvme_tcp 00:22:04.121 rmmod nvme_fabrics 00:22:04.121 rmmod nvme_keyring 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2759076 ']' 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2759076 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2759076 ']' 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2759076 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2759076 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2759076' 00:22:04.121 killing process with pid 2759076 00:22:04.121 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2759076 00:22:04.121 [2024-05-15 10:41:19.907385] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.121 [2024-05-15 10:41:19.907419] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:04.122 [2024-05-15 10:41:19.907428] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.122 10:41:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2759076 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.688 10:41:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.688 10:41:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.593 10:41:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.593 00:22:06.593 real 0m9.533s 00:22:06.593 user 0m3.516s 00:22:06.593 sys 0m4.311s 00:22:06.593 10:41:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:06.593 10:41:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.593 ************************************ 00:22:06.593 END TEST nvmf_async_init 00:22:06.593 ************************************ 00:22:06.853 10:41:22 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.853 10:41:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:06.853 10:41:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:06.853 10:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.853 ************************************ 00:22:06.853 START TEST dma 00:22:06.853 ************************************ 00:22:06.853 10:41:22 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.853 * Looking for test storage... 
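The latter half of the async_init run above exercises the experimental NVMe/TCP TLS path: a retained PSK is written to a 0600 temp file, the subsystem stops accepting arbitrary hosts, a second listener on port 4421 is added with --secure-channel, the host NQN is allowed with that PSK, and bdev_nvme_attach_controller connects with the same key (the PSK-path and spdk_nvme_ctrlr_opts.psk deprecation warnings are expected on this revision). The same steps as standalone RPCs, using the key literal this run used and assuming scripts/rpc.py is on PATH as rpc.py:

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$key_path"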
00:22:06.853 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:06.853 10:41:22 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.853 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:06.853 10:41:22 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.853 10:41:22 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.853 10:41:22 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.854 10:41:22 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.854 10:41:22 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.854 10:41:22 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.854 10:41:22 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:06.854 10:41:22 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.854 10:41:22 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.854 10:41:22 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:06.854 10:41:22 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:06.854 00:22:06.854 real 0m0.090s 00:22:06.854 user 0m0.037s 00:22:06.854 sys 0m0.057s 00:22:06.854 10:41:22 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:06.854 10:41:22 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:06.854 ************************************ 00:22:06.854 END TEST dma 00:22:06.854 ************************************ 00:22:06.854 10:41:22 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:06.854 10:41:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:06.854 10:41:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:06.854 10:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.854 ************************************ 00:22:06.854 START TEST nvmf_identify 00:22:06.854 ************************************ 00:22:06.854 10:41:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:06.854 * Looking for test storage... 
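The dma suite above is effectively a no-op on TCP: host/dma.sh checks the transport right after sourcing the common scripts and exits 0 when it is not rdma, which is why its timing summary is a fraction of a second. The gate amounts to the following; the trace only shows the already-expanded comparison, so the variable name here is an assumption:

    # host/dma.sh, as traced above: skip the suite unless the transport is rdma
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi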
00:22:07.118 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.118 10:41:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.119 10:41:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:12.427 10:41:27 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:12.427 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:12.427 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:12.427 Found net devices under 0000:27:00.0: cvl_0_0 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify 
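(Annotation, not part of the captured output.) The trace above shows nvmf/common.sh matching the two Intel E810 functions at 0000:27:00.0/0000:27:00.1 and resolving their kernel net devices through sysfs. A minimal sketch of that mapping step, assuming the same PCI addresses and the standard /sys/bus/pci/devices/<pci>/net/ layout used by the script:

# sketch only: mirror the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) lookup
for pci in 0000:27:00.0 0000:27:00.1; do
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdev" ] || continue          # skip if the function has no netdev
        echo "Found net device under $pci: ${netdev##*/}"
    done
done

On this host the loop would print cvl_0_0 and cvl_0_1, the two interfaces the test splits between target and initiator below.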
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:12.427 Found net devices under 0000:27:00.1: cvl_0_1 00:22:12.427 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:12.428 00:22:12.428 --- 10.0.0.2 ping statistics --- 00:22:12.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.428 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:22:12.428 00:22:12.428 --- 10.0.0.1 ping statistics --- 00:22:12.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.428 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2763476 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2763476 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2763476 ']' 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:12.428 10:41:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.428 [2024-05-15 10:41:28.034688] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:22:12.428 [2024-05-15 10:41:28.034788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.428 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.428 [2024-05-15 10:41:28.153952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.428 [2024-05-15 10:41:28.253423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.428 [2024-05-15 10:41:28.253459] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
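(Annotation, not part of the captured output.) nvmf_tcp_init above moves one port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target and leaves the other (cvl_0_1, 10.0.0.1) in the root namespace for the initiator, then verifies both directions with ping. A minimal sketch of the same plumbing, assuming no E810 hardware and substituting a veth pair for the two physical ports (names and addresses copied from the log):

# sketch only: reproduce the target/initiator namespace split with a veth pair
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link add cvl_0_0 type veth peer name cvl_0_1      # stand-in for the two ice ports
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open NVMe/TCP port
ping -c 1 10.0.0.2                                                          # target reachable?

The nvmf_tgt process is then launched with ip netns exec cvl_0_0_ns_spdk, as the NVMF_TARGET_NS_CMD lines above show, so it only sees the target-side interface.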
00:22:12.428 [2024-05-15 10:41:28.253468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.428 [2024-05-15 10:41:28.253476] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.428 [2024-05-15 10:41:28.253485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.428 [2024-05-15 10:41:28.253631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.428 [2024-05-15 10:41:28.253730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.428 [2024-05-15 10:41:28.253830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.428 [2024-05-15 10:41:28.253840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 [2024-05-15 10:41:28.736125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 Malloc0 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 [2024-05-15 10:41:28.837516] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:12.994 [2024-05-15 10:41:28.837788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.994 [ 00:22:12.994 { 00:22:12.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.994 "subtype": "Discovery", 00:22:12.994 "listen_addresses": [ 00:22:12.994 { 00:22:12.994 "trtype": "TCP", 00:22:12.994 "adrfam": "IPv4", 00:22:12.994 "traddr": "10.0.0.2", 00:22:12.994 "trsvcid": "4420" 00:22:12.994 } 00:22:12.994 ], 00:22:12.994 "allow_any_host": true, 00:22:12.994 "hosts": [] 00:22:12.994 }, 00:22:12.994 { 00:22:12.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.994 "subtype": "NVMe", 00:22:12.994 "listen_addresses": [ 00:22:12.994 { 00:22:12.994 "trtype": "TCP", 00:22:12.994 "adrfam": "IPv4", 00:22:12.994 "traddr": "10.0.0.2", 00:22:12.994 "trsvcid": "4420" 00:22:12.994 } 00:22:12.994 ], 00:22:12.994 "allow_any_host": true, 00:22:12.994 "hosts": [], 00:22:12.994 "serial_number": "SPDK00000000000001", 00:22:12.994 "model_number": "SPDK bdev Controller", 00:22:12.994 "max_namespaces": 32, 00:22:12.994 "min_cntlid": 1, 00:22:12.994 "max_cntlid": 65519, 00:22:12.994 "namespaces": [ 00:22:12.994 { 00:22:12.994 "nsid": 1, 00:22:12.994 "bdev_name": "Malloc0", 00:22:12.994 "name": "Malloc0", 00:22:12.994 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:12.994 "eui64": "ABCDEF0123456789", 00:22:12.994 "uuid": "52376b26-ab54-4bf2-8392-a28e048fe094" 00:22:12.994 } 00:22:12.994 ] 00:22:12.994 } 00:22:12.994 ] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.994 10:41:28 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:13.257 [2024-05-15 10:41:28.900311] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
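(Annotation, not part of the captured output.) The rpc_cmd calls above are issued through the test harness, which forwards them to scripts/rpc.py against the running nvmf_tgt. A roughly equivalent standalone sequence, with all arguments copied from the log and assuming the target is already up on the default /var/tmp/spdk.sock, would look like this sketch:

# sketch only: rebuild the same target configuration by hand
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems        # should list the discovery subsystem and cnode1

With that in place, the test points build/bin/spdk_nvme_identify at the discovery NQN via the transport-ID string shown above ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' with -L all), which drives the fabric CONNECT, property get/set, IDENTIFY and discovery GET LOG PAGE exchanges traced in the nvme_tcp.c debug lines that follow.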
00:22:13.257 [2024-05-15 10:41:28.900401] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763627 ] 00:22:13.257 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.257 [2024-05-15 10:41:28.955344] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:13.257 [2024-05-15 10:41:28.955448] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:13.257 [2024-05-15 10:41:28.955459] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:13.257 [2024-05-15 10:41:28.955486] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:13.257 [2024-05-15 10:41:28.955501] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:13.257 [2024-05-15 10:41:28.955825] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:13.257 [2024-05-15 10:41:28.955865] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:22:13.257 [2024-05-15 10:41:28.970057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:13.257 [2024-05-15 10:41:28.970080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:13.257 [2024-05-15 10:41:28.970089] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:13.257 [2024-05-15 10:41:28.970095] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:13.257 [2024-05-15 10:41:28.970152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.970164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.970171] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.257 [2024-05-15 10:41:28.970202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:13.257 [2024-05-15 10:41:28.970226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.257 [2024-05-15 10:41:28.978063] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.257 [2024-05-15 10:41:28.978080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.257 [2024-05-15 10:41:28.978087] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.257 [2024-05-15 10:41:28.978118] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:13.257 [2024-05-15 10:41:28.978134] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:13.257 [2024-05-15 10:41:28.978143] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:13.257 [2024-05-15 10:41:28.978166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978173] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.257 [2024-05-15 10:41:28.978196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.257 [2024-05-15 10:41:28.978216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.257 [2024-05-15 10:41:28.978324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.257 [2024-05-15 10:41:28.978333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.257 [2024-05-15 10:41:28.978344] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978350] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.257 [2024-05-15 10:41:28.978363] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:13.257 [2024-05-15 10:41:28.978374] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:13.257 [2024-05-15 10:41:28.978383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978394] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.257 [2024-05-15 10:41:28.978405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.257 [2024-05-15 10:41:28.978416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.257 [2024-05-15 10:41:28.978485] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.257 [2024-05-15 10:41:28.978493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.257 [2024-05-15 10:41:28.978497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978502] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.257 [2024-05-15 10:41:28.978509] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:13.257 [2024-05-15 10:41:28.978520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:13.257 [2024-05-15 10:41:28.978527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.257 [2024-05-15 10:41:28.978543] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:28.978553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:28.978565] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:28.978638] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:28.978645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:28.978651] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:28.978663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:13.258 [2024-05-15 10:41:28.978673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:28.978693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:28.978708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:28.978784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:28.978791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:28.978795] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978799] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:28.978806] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:13.258 [2024-05-15 10:41:28.978813] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:13.258 [2024-05-15 10:41:28.978825] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:13.258 [2024-05-15 10:41:28.978933] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:13.258 [2024-05-15 10:41:28.978939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:13.258 [2024-05-15 10:41:28.978952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.978963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:28.978975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:28.978992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:28.979075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:28.979082] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:28.979086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979090] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:28.979097] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:13.258 [2024-05-15 10:41:28.979107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:28.979126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:28.979138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:28.979211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:28.979217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:28.979221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979225] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:28.979234] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:13.258 [2024-05-15 10:41:28.979240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:28.979249] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:13.258 [2024-05-15 10:41:28.979257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:28.979273] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979279] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:28.979290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:28.979301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:28.979408] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.258 [2024-05-15 10:41:28.979415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.258 [2024-05-15 10:41:28.979420] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979426] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:22:13.258 [2024-05-15 10:41:28.979433] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.258 [2024-05-15 10:41:28.979439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979454] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:28.979461] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:29.020232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:29.020237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:29.020259] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:13.258 [2024-05-15 10:41:29.020271] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:13.258 [2024-05-15 10:41:29.020278] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:13.258 [2024-05-15 10:41:29.020286] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:13.258 [2024-05-15 10:41:29.020292] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:13.258 [2024-05-15 10:41:29.020301] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:29.020313] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:29.020326] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020342] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.258 [2024-05-15 10:41:29.020376] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.258 [2024-05-15 10:41:29.020457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.258 [2024-05-15 10:41:29.020464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.258 [2024-05-15 10:41:29.020468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.258 [2024-05-15 10:41:29.020483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020496] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.258 [2024-05-15 10:41:29.020514] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020518] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.258 [2024-05-15 10:41:29.020536] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020545] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.258 [2024-05-15 10:41:29.020558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020563] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.258 [2024-05-15 10:41:29.020580] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:29.020591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:13.258 [2024-05-15 10:41:29.020598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.258 [2024-05-15 10:41:29.020604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.258 [2024-05-15 10:41:29.020615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.258 [2024-05-15 10:41:29.020628] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.259 [2024-05-15 10:41:29.020633] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:13.259 [2024-05-15 10:41:29.020638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:13.259 [2024-05-15 10:41:29.020643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.259 [2024-05-15 10:41:29.020650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.259 [2024-05-15 10:41:29.020756] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.020763] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.020767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.259 [2024-05-15 10:41:29.020779] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:13.259 [2024-05-15 10:41:29.020786] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:13.259 [2024-05-15 10:41:29.020800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020806] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.259 [2024-05-15 10:41:29.020820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.259 [2024-05-15 10:41:29.020831] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.259 [2024-05-15 10:41:29.020921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.259 [2024-05-15 10:41:29.020928] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.259 [2024-05-15 10:41:29.020936] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020941] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:13.259 [2024-05-15 10:41:29.020949] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.259 [2024-05-15 10:41:29.020954] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020963] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020969] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.020985] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.020989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.020994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.259 [2024-05-15 10:41:29.021011] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:13.259 [2024-05-15 10:41:29.021057] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.259 [2024-05-15 10:41:29.021074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.259 [2024-05-15 10:41:29.021082] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021087] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:13.259 [2024-05-15 10:41:29.021093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.259 [2024-05-15 10:41:29.021101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.259 [2024-05-15 10:41:29.021114] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.259 [2024-05-15 10:41:29.021120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.259 [2024-05-15 10:41:29.021288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.259 [2024-05-15 10:41:29.021295] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.259 [2024-05-15 10:41:29.021300] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021305] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=1024, cccid=4 00:22:13.259 [2024-05-15 10:41:29.021313] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=1024 00:22:13.259 [2024-05-15 10:41:29.021319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021327] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021332] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021339] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.021347] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.021351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.021356] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.259 [2024-05-15 10:41:29.066057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.066071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.066076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.259 [2024-05-15 10:41:29.066106] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066112] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.259 [2024-05-15 10:41:29.066122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.259 [2024-05-15 10:41:29.066140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.259 [2024-05-15 10:41:29.066254] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.259 [2024-05-15 10:41:29.066261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.259 [2024-05-15 10:41:29.066265] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066270] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x615000024980): datao=0, datal=3072, cccid=4 00:22:13.259 [2024-05-15 10:41:29.066276] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=3072 00:22:13.259 [2024-05-15 10:41:29.066280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066289] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066294] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.066308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.066312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.259 [2024-05-15 10:41:29.066327] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066336] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.259 [2024-05-15 10:41:29.066345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.259 [2024-05-15 10:41:29.066358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.259 [2024-05-15 10:41:29.066449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.259 [2024-05-15 10:41:29.066455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.259 [2024-05-15 10:41:29.066459] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066463] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8, cccid=4 00:22:13.259 [2024-05-15 10:41:29.066468] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=8 00:22:13.259 [2024-05-15 10:41:29.066473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066480] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.066484] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.107246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.259 [2024-05-15 10:41:29.107262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.259 [2024-05-15 10:41:29.107266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.259 [2024-05-15 10:41:29.107271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.259 ===================================================== 00:22:13.259 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:13.259 ===================================================== 00:22:13.259 Controller Capabilities/Features 00:22:13.259 ================================ 00:22:13.259 Vendor ID: 0000 00:22:13.259 Subsystem Vendor ID: 0000 00:22:13.259 Serial Number: .................... 
00:22:13.259 Model Number: ........................................ 00:22:13.259 Firmware Version: 24.05 00:22:13.259 Recommended Arb Burst: 0 00:22:13.259 IEEE OUI Identifier: 00 00 00 00:22:13.259 Multi-path I/O 00:22:13.259 May have multiple subsystem ports: No 00:22:13.259 May have multiple controllers: No 00:22:13.259 Associated with SR-IOV VF: No 00:22:13.259 Max Data Transfer Size: 131072 00:22:13.259 Max Number of Namespaces: 0 00:22:13.259 Max Number of I/O Queues: 1024 00:22:13.259 NVMe Specification Version (VS): 1.3 00:22:13.259 NVMe Specification Version (Identify): 1.3 00:22:13.259 Maximum Queue Entries: 128 00:22:13.259 Contiguous Queues Required: Yes 00:22:13.259 Arbitration Mechanisms Supported 00:22:13.259 Weighted Round Robin: Not Supported 00:22:13.259 Vendor Specific: Not Supported 00:22:13.259 Reset Timeout: 15000 ms 00:22:13.259 Doorbell Stride: 4 bytes 00:22:13.259 NVM Subsystem Reset: Not Supported 00:22:13.259 Command Sets Supported 00:22:13.259 NVM Command Set: Supported 00:22:13.259 Boot Partition: Not Supported 00:22:13.259 Memory Page Size Minimum: 4096 bytes 00:22:13.259 Memory Page Size Maximum: 4096 bytes 00:22:13.259 Persistent Memory Region: Not Supported 00:22:13.259 Optional Asynchronous Events Supported 00:22:13.259 Namespace Attribute Notices: Not Supported 00:22:13.259 Firmware Activation Notices: Not Supported 00:22:13.259 ANA Change Notices: Not Supported 00:22:13.259 PLE Aggregate Log Change Notices: Not Supported 00:22:13.260 LBA Status Info Alert Notices: Not Supported 00:22:13.260 EGE Aggregate Log Change Notices: Not Supported 00:22:13.260 Normal NVM Subsystem Shutdown event: Not Supported 00:22:13.260 Zone Descriptor Change Notices: Not Supported 00:22:13.260 Discovery Log Change Notices: Supported 00:22:13.260 Controller Attributes 00:22:13.260 128-bit Host Identifier: Not Supported 00:22:13.260 Non-Operational Permissive Mode: Not Supported 00:22:13.260 NVM Sets: Not Supported 00:22:13.260 Read Recovery Levels: Not Supported 00:22:13.260 Endurance Groups: Not Supported 00:22:13.260 Predictable Latency Mode: Not Supported 00:22:13.260 Traffic Based Keep ALive: Not Supported 00:22:13.260 Namespace Granularity: Not Supported 00:22:13.260 SQ Associations: Not Supported 00:22:13.260 UUID List: Not Supported 00:22:13.260 Multi-Domain Subsystem: Not Supported 00:22:13.260 Fixed Capacity Management: Not Supported 00:22:13.260 Variable Capacity Management: Not Supported 00:22:13.260 Delete Endurance Group: Not Supported 00:22:13.260 Delete NVM Set: Not Supported 00:22:13.260 Extended LBA Formats Supported: Not Supported 00:22:13.260 Flexible Data Placement Supported: Not Supported 00:22:13.260 00:22:13.260 Controller Memory Buffer Support 00:22:13.260 ================================ 00:22:13.260 Supported: No 00:22:13.260 00:22:13.260 Persistent Memory Region Support 00:22:13.260 ================================ 00:22:13.260 Supported: No 00:22:13.260 00:22:13.260 Admin Command Set Attributes 00:22:13.260 ============================ 00:22:13.260 Security Send/Receive: Not Supported 00:22:13.260 Format NVM: Not Supported 00:22:13.260 Firmware Activate/Download: Not Supported 00:22:13.260 Namespace Management: Not Supported 00:22:13.260 Device Self-Test: Not Supported 00:22:13.260 Directives: Not Supported 00:22:13.260 NVMe-MI: Not Supported 00:22:13.260 Virtualization Management: Not Supported 00:22:13.260 Doorbell Buffer Config: Not Supported 00:22:13.260 Get LBA Status Capability: Not Supported 00:22:13.260 Command & Feature Lockdown Capability: 
Not Supported 00:22:13.260 Abort Command Limit: 1 00:22:13.260 Async Event Request Limit: 4 00:22:13.260 Number of Firmware Slots: N/A 00:22:13.260 Firmware Slot 1 Read-Only: N/A 00:22:13.260 Firmware Activation Without Reset: N/A 00:22:13.260 Multiple Update Detection Support: N/A 00:22:13.260 Firmware Update Granularity: No Information Provided 00:22:13.260 Per-Namespace SMART Log: No 00:22:13.260 Asymmetric Namespace Access Log Page: Not Supported 00:22:13.260 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:13.260 Command Effects Log Page: Not Supported 00:22:13.260 Get Log Page Extended Data: Supported 00:22:13.260 Telemetry Log Pages: Not Supported 00:22:13.260 Persistent Event Log Pages: Not Supported 00:22:13.260 Supported Log Pages Log Page: May Support 00:22:13.260 Commands Supported & Effects Log Page: Not Supported 00:22:13.260 Feature Identifiers & Effects Log Page:May Support 00:22:13.260 NVMe-MI Commands & Effects Log Page: May Support 00:22:13.260 Data Area 4 for Telemetry Log: Not Supported 00:22:13.260 Error Log Page Entries Supported: 128 00:22:13.260 Keep Alive: Not Supported 00:22:13.260 00:22:13.260 NVM Command Set Attributes 00:22:13.260 ========================== 00:22:13.260 Submission Queue Entry Size 00:22:13.260 Max: 1 00:22:13.260 Min: 1 00:22:13.260 Completion Queue Entry Size 00:22:13.260 Max: 1 00:22:13.260 Min: 1 00:22:13.260 Number of Namespaces: 0 00:22:13.260 Compare Command: Not Supported 00:22:13.260 Write Uncorrectable Command: Not Supported 00:22:13.260 Dataset Management Command: Not Supported 00:22:13.260 Write Zeroes Command: Not Supported 00:22:13.260 Set Features Save Field: Not Supported 00:22:13.260 Reservations: Not Supported 00:22:13.260 Timestamp: Not Supported 00:22:13.260 Copy: Not Supported 00:22:13.260 Volatile Write Cache: Not Present 00:22:13.260 Atomic Write Unit (Normal): 1 00:22:13.260 Atomic Write Unit (PFail): 1 00:22:13.260 Atomic Compare & Write Unit: 1 00:22:13.260 Fused Compare & Write: Supported 00:22:13.260 Scatter-Gather List 00:22:13.260 SGL Command Set: Supported 00:22:13.260 SGL Keyed: Supported 00:22:13.260 SGL Bit Bucket Descriptor: Not Supported 00:22:13.260 SGL Metadata Pointer: Not Supported 00:22:13.260 Oversized SGL: Not Supported 00:22:13.260 SGL Metadata Address: Not Supported 00:22:13.260 SGL Offset: Supported 00:22:13.260 Transport SGL Data Block: Not Supported 00:22:13.260 Replay Protected Memory Block: Not Supported 00:22:13.260 00:22:13.260 Firmware Slot Information 00:22:13.260 ========================= 00:22:13.260 Active slot: 0 00:22:13.260 00:22:13.260 00:22:13.260 Error Log 00:22:13.260 ========= 00:22:13.260 00:22:13.260 Active Namespaces 00:22:13.260 ================= 00:22:13.260 Discovery Log Page 00:22:13.260 ================== 00:22:13.260 Generation Counter: 2 00:22:13.260 Number of Records: 2 00:22:13.260 Record Format: 0 00:22:13.260 00:22:13.260 Discovery Log Entry 0 00:22:13.260 ---------------------- 00:22:13.260 Transport Type: 3 (TCP) 00:22:13.260 Address Family: 1 (IPv4) 00:22:13.260 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:13.260 Entry Flags: 00:22:13.260 Duplicate Returned Information: 1 00:22:13.260 Explicit Persistent Connection Support for Discovery: 1 00:22:13.260 Transport Requirements: 00:22:13.260 Secure Channel: Not Required 00:22:13.260 Port ID: 0 (0x0000) 00:22:13.260 Controller ID: 65535 (0xffff) 00:22:13.260 Admin Max SQ Size: 128 00:22:13.260 Transport Service Identifier: 4420 00:22:13.260 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:22:13.260 Transport Address: 10.0.0.2 00:22:13.260 Discovery Log Entry 1 00:22:13.260 ---------------------- 00:22:13.260 Transport Type: 3 (TCP) 00:22:13.260 Address Family: 1 (IPv4) 00:22:13.260 Subsystem Type: 2 (NVM Subsystem) 00:22:13.260 Entry Flags: 00:22:13.260 Duplicate Returned Information: 0 00:22:13.260 Explicit Persistent Connection Support for Discovery: 0 00:22:13.260 Transport Requirements: 00:22:13.260 Secure Channel: Not Required 00:22:13.260 Port ID: 0 (0x0000) 00:22:13.260 Controller ID: 65535 (0xffff) 00:22:13.260 Admin Max SQ Size: 128 00:22:13.260 Transport Service Identifier: 4420 00:22:13.260 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:13.260 Transport Address: 10.0.0.2 [2024-05-15 10:41:29.107412] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:13.260 [2024-05-15 10:41:29.107432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.260 [2024-05-15 10:41:29.107441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.260 [2024-05-15 10:41:29.107448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.260 [2024-05-15 10:41:29.107455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.260 [2024-05-15 10:41:29.107466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.260 [2024-05-15 10:41:29.107489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.260 [2024-05-15 10:41:29.107509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.260 [2024-05-15 10:41:29.107585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.260 [2024-05-15 10:41:29.107593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.260 [2024-05-15 10:41:29.107598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107603] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.260 [2024-05-15 10:41:29.107615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107627] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.260 [2024-05-15 10:41:29.107638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.260 [2024-05-15 10:41:29.107655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.260 [2024-05-15 10:41:29.107737] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.260 [2024-05-15 10:41:29.107744] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.260 [2024-05-15 10:41:29.107748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.260 [2024-05-15 10:41:29.107760] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:13.260 [2024-05-15 10:41:29.107767] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:13.260 [2024-05-15 10:41:29.107778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.260 [2024-05-15 10:41:29.107783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.107788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.107798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.107809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.107878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.107885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.107888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.107893] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.107903] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.107907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.107912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.107919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.107930] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108000] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108010] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108015] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108148] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108179] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108278] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108304] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108394] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108434] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108519] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108532] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108537] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108637] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.108767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108781] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.108883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.108889] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
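The GET LOG PAGE (02) admin commands traced above (cdw10 values ending in 0x70) are what populate the Discovery Log Page dump: a short read for the header, a 3072-byte read for the full page, and an 8-byte re-read of the generation counter. A minimal sketch of issuing the same read through SPDK's public host API follows; the already-connected ctrlr handle, the global g_done flag, and the fixed 4 KiB buffer are assumptions made for illustration and are not part of the test script.

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static bool g_done;

/* Completion callback: just note that the admin command finished. */
static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
}

/*
 * Read the discovery log page (log identifier 0x70) from an already
 * connected discovery controller and print each record, mirroring the
 * "Discovery Log Page" dump above.  Assumes the whole page fits in one
 * 4 KiB read; the real identify tool re-reads when genctr changes.
 */
static int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);
	if (log == NULL) {
		return -1;
	}

	g_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     0 /* nsid */, log, 4096,
					     0 /* offset */,
					     get_log_done, NULL) != 0) {
		free(log);
		return -1;
	}
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("Generation Counter: %ju, Number of Records: %ju\n",
	       (uintmax_t)log->genctr, (uintmax_t)log->numrec);
	/* At most 3 entries fit behind the 1 KiB header in a 4 KiB buffer. */
	for (uint64_t i = 0; i < log->numrec && i < 3; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
		printf("  subnqn=%.256s traddr=%.256s trsvcid=%.32s\n",
		       e->subnqn, e->traddr, e->trsvcid);
	}
	free(log);
	return 0;
}

Polling spdk_nvme_ctrlr_process_admin_completions() in the caller is roughly what produces the nvme_tcp.c capsule/C2H PDU DEBUG lines interleaved around the log-page reads above.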
00:22:13.261 [2024-05-15 10:41:29.108893] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108897] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.108906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.108915] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.108923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.108933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.109008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.109014] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.109018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.109023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.261 [2024-05-15 10:41:29.109032] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.109037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.261 [2024-05-15 10:41:29.109041] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.261 [2024-05-15 10:41:29.109051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.261 [2024-05-15 10:41:29.109061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.261 [2024-05-15 10:41:29.109134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.261 [2024-05-15 10:41:29.109140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.261 [2024-05-15 10:41:29.109144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109158] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109166] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109184] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109255] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109267] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 
10:41:29.109272] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109285] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109308] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109377] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109387] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109400] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109528] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109532] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109550] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109631] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 
00:22:13.262 [2024-05-15 10:41:29.109641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109649] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109669] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109761] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109765] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109769] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109860] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109868] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.109878] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.109894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.109905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.109983] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.109989] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.109993] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.109997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.110006] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 
10:41:29.110011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.110015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.110023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.110033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.114054] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.114062] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.114066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.114071] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.114081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.114086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.114090] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.262 [2024-05-15 10:41:29.114099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.262 [2024-05-15 10:41:29.114110] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.262 [2024-05-15 10:41:29.114190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.262 [2024-05-15 10:41:29.114198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.262 [2024-05-15 10:41:29.114203] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.262 [2024-05-15 10:41:29.114207] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.262 [2024-05-15 10:41:29.114216] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:13.522 00:22:13.522 10:41:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:13.522 [2024-05-15 10:41:29.200505] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
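The host/identify.sh@45 command above runs spdk_nvme_identify directly against the NVM subsystem (subnqn nqn.2016-06.io.spdk:cnode1) rather than the discovery NQN. A minimal sketch of how that -r transport-ID string maps onto SPDK's host API is shown below, assuming default controller options; the program name and the printed fields are illustrative and this is not a reimplementation of the identify tool.

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages / DPDK EAL), as in the
	 * "DPDK EAL parameters" line above. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport-ID string that the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: this drives the admin-queue state machine
	 * (FABRIC CONNECT, read VS/CAP, CC.EN = 1, wait for CSTS.RDY,
	 * IDENTIFY) that the DEBUG trace below walks through. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s  MN: %.40s  FR: %.8s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn,
	       (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}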
00:22:13.522 [2024-05-15 10:41:29.200606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763634 ] 00:22:13.522 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.522 [2024-05-15 10:41:29.256882] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:13.522 [2024-05-15 10:41:29.256964] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:13.522 [2024-05-15 10:41:29.256973] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:13.522 [2024-05-15 10:41:29.256994] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:13.522 [2024-05-15 10:41:29.257008] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:13.522 [2024-05-15 10:41:29.261093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:13.522 [2024-05-15 10:41:29.261129] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:22:13.522 [2024-05-15 10:41:29.277062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:13.522 [2024-05-15 10:41:29.277084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:13.522 [2024-05-15 10:41:29.277092] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:13.522 [2024-05-15 10:41:29.277099] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:13.522 [2024-05-15 10:41:29.277146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.522 [2024-05-15 10:41:29.277159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.522 [2024-05-15 10:41:29.277167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.522 [2024-05-15 10:41:29.277194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:13.522 [2024-05-15 10:41:29.277221] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.522 [2024-05-15 10:41:29.285065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.522 [2024-05-15 10:41:29.285080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.522 [2024-05-15 10:41:29.285085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.522 [2024-05-15 10:41:29.285091] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.522 [2024-05-15 10:41:29.285105] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:13.522 [2024-05-15 10:41:29.285120] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:13.522 [2024-05-15 10:41:29.285127] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:13.522 [2024-05-15 10:41:29.285145] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.522 [2024-05-15 10:41:29.285152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:13.522 [2024-05-15 10:41:29.285159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.285175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.285194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.285425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.285433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.285443] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.285458] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:13.523 [2024-05-15 10:41:29.285467] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:13.523 [2024-05-15 10:41:29.285475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285489] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.285500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.285512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.285606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.285614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.285617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.285629] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:13.523 [2024-05-15 10:41:29.285638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.285647] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.285668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.285681] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.285782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.285788] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.285792] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.285805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.285815] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285820] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.285836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.285848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.285939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.285945] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.285949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.285953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.285961] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:13.523 [2024-05-15 10:41:29.285968] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.285976] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.286083] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:13.523 [2024-05-15 10:41:29.286091] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.286103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.286122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.286134] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.286229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.286239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.286244] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 
10:41:29.286248] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.286254] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:13.523 [2024-05-15 10:41:29.286264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.286285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.286296] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.286390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.286399] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.286403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.286414] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:13.523 [2024-05-15 10:41:29.286420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:13.523 [2024-05-15 10:41:29.286429] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:13.523 [2024-05-15 10:41:29.286440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:13.523 [2024-05-15 10:41:29.286453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286459] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.286468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.523 [2024-05-15 10:41:29.286480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.286637] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.523 [2024-05-15 10:41:29.286643] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.523 [2024-05-15 10:41:29.286648] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286653] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:22:13.523 [2024-05-15 10:41:29.286662] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.523 [2024-05-15 10:41:29.286668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 
[2024-05-15 10:41:29.286746] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.286752] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.328374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.328379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328384] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.328399] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:13.523 [2024-05-15 10:41:29.328406] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:13.523 [2024-05-15 10:41:29.328412] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:13.523 [2024-05-15 10:41:29.328418] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:13.523 [2024-05-15 10:41:29.328424] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:13.523 [2024-05-15 10:41:29.328431] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:13.523 [2024-05-15 10:41:29.328445] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:13.523 [2024-05-15 10:41:29.328456] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.523 [2024-05-15 10:41:29.328479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.523 [2024-05-15 10:41:29.328494] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.523 [2024-05-15 10:41:29.328607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.523 [2024-05-15 10:41:29.328614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.523 [2024-05-15 10:41:29.328618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328624] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:13.523 [2024-05-15 10:41:29.328634] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.523 [2024-05-15 10:41:29.328645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.328653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.524 [2024-05-15 10:41:29.328662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328671] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.328678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.524 [2024-05-15 10:41:29.328684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328689] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.328703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.524 [2024-05-15 10:41:29.328710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.328725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.524 [2024-05-15 10:41:29.328731] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.328741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.328749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.328763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.524 [2024-05-15 10:41:29.328776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:13.524 [2024-05-15 10:41:29.328781] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:13.524 [2024-05-15 10:41:29.328786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:13.524 [2024-05-15 10:41:29.328791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.524 [2024-05-15 10:41:29.328796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.524 [2024-05-15 10:41:29.328924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.524 [2024-05-15 10:41:29.328930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.524 [2024-05-15 10:41:29.328934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328939] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.524 [2024-05-15 10:41:29.328945] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:13.524 [2024-05-15 10:41:29.328952] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.328962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.328972] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.328985] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.328997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.329006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.524 [2024-05-15 10:41:29.329017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.524 [2024-05-15 10:41:29.333068] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.524 [2024-05-15 10:41:29.333078] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.524 [2024-05-15 10:41:29.333082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.333087] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.524 [2024-05-15 10:41:29.333138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.333151] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.333162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.333167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.333178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.524 [2024-05-15 10:41:29.333190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.524 [2024-05-15 10:41:29.333310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.524 [2024-05-15 10:41:29.333316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.524 [2024-05-15 10:41:29.333320] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.333326] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:13.524 [2024-05-15 10:41:29.333331] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.524 [2024-05-15 10:41:29.333336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.333430] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.333434] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.524 [2024-05-15 10:41:29.375376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.524 [2024-05-15 10:41:29.375381] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.524 [2024-05-15 10:41:29.375408] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:13.524 [2024-05-15 10:41:29.375432] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.375443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:13.524 [2024-05-15 10:41:29.375456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.524 [2024-05-15 10:41:29.375472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.524 [2024-05-15 10:41:29.375485] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.524 [2024-05-15 10:41:29.375618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.524 [2024-05-15 10:41:29.375628] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.524 [2024-05-15 10:41:29.375632] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375637] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:13.524 [2024-05-15 10:41:29.375642] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.524 [2024-05-15 10:41:29.375647] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375735] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.524 [2024-05-15 10:41:29.375739] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421054] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.787 [2024-05-15 10:41:29.421070] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.787 [2024-05-15 10:41:29.421075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421080] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.787 [2024-05-15 10:41:29.421103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.421115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.421126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.787 [2024-05-15 10:41:29.421145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.787 [2024-05-15 10:41:29.421160] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.787 [2024-05-15 10:41:29.421287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.787 [2024-05-15 10:41:29.421294] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.787 [2024-05-15 10:41:29.421297] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421302] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:13.787 [2024-05-15 10:41:29.421307] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.787 [2024-05-15 10:41:29.421312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421399] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.421403] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.787 [2024-05-15 10:41:29.463359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.787 [2024-05-15 10:41:29.463364] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.787 [2024-05-15 10:41:29.463388] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463397] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463408] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463424] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463432] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:13.787 [2024-05-15 10:41:29.463438] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:13.787 [2024-05-15 10:41:29.463445] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:13.787 
[2024-05-15 10:41:29.463474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.787 [2024-05-15 10:41:29.463490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.787 [2024-05-15 10:41:29.463500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.787 [2024-05-15 10:41:29.463519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.787 [2024-05-15 10:41:29.463533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.787 [2024-05-15 10:41:29.463540] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.787 [2024-05-15 10:41:29.463665] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.787 [2024-05-15 10:41:29.463672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.787 [2024-05-15 10:41:29.463677] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463683] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.787 [2024-05-15 10:41:29.463691] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.787 [2024-05-15 10:41:29.463701] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.787 [2024-05-15 10:41:29.463705] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.787 [2024-05-15 10:41:29.463718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.787 [2024-05-15 10:41:29.463731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.787 [2024-05-15 10:41:29.463741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.787 [2024-05-15 10:41:29.463839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.787 [2024-05-15 10:41:29.463845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.787 [2024-05-15 10:41:29.463849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463855] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.787 [2024-05-15 10:41:29.463864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.787 [2024-05-15 10:41:29.463868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.463876] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.463885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.788 [2024-05-15 10:41:29.463978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.463984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.463988] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.463992] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.788 [2024-05-15 10:41:29.464001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464005] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.464015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.464025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.788 [2024-05-15 10:41:29.464125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.464131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.464135] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464140] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.788 [2024-05-15 10:41:29.464158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.464174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.464183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464188] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.464197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.464205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464211] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.464220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.464229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:22:13.788 [2024-05-15 10:41:29.464243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.788 [2024-05-15 10:41:29.464255] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:13.788 [2024-05-15 10:41:29.464262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:13.788 [2024-05-15 10:41:29.464267] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:22:13.788 [2024-05-15 10:41:29.464280] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:22:13.788 [2024-05-15 10:41:29.464445] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.788 [2024-05-15 10:41:29.464453] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.788 [2024-05-15 10:41:29.464457] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464463] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8192, cccid=5 00:22:13.788 [2024-05-15 10:41:29.464469] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x615000024980): expected_datao=0, payload_size=8192 00:22:13.788 [2024-05-15 10:41:29.464474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464604] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464609] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.788 [2024-05-15 10:41:29.464623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.788 [2024-05-15 10:41:29.464627] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464631] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=4 00:22:13.788 [2024-05-15 10:41:29.464636] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:22:13.788 [2024-05-15 10:41:29.464641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464650] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464653] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464663] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.788 [2024-05-15 10:41:29.464669] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.788 [2024-05-15 10:41:29.464673] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464677] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=6 00:22:13.788 [2024-05-15 10:41:29.464683] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:22:13.788 [2024-05-15 10:41:29.464687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464694] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464698] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464703] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.788 [2024-05-15 10:41:29.464709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.788 [2024-05-15 10:41:29.464713] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464718] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=7 00:22:13.788 [2024-05-15 10:41:29.464723] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:13.788 [2024-05-15 10:41:29.464727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464735] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464738] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.464752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.464756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:13.788 [2024-05-15 10:41:29.464780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.464786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.464790] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:13.788 [2024-05-15 10:41:29.464805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.464811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.464815] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x615000024980 00:22:13.788 [2024-05-15 10:41:29.464830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.788 [2024-05-15 10:41:29.464836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.788 [2024-05-15 10:41:29.464840] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.788 [2024-05-15 10:41:29.464845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:22:13.788 ===================================================== 00:22:13.788 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.788 ===================================================== 00:22:13.788 Controller Capabilities/Features 00:22:13.788 ================================ 00:22:13.788 Vendor ID: 8086 00:22:13.788 Subsystem Vendor ID: 8086 00:22:13.788 Serial Number: SPDK00000000000001 00:22:13.788 Model Number: SPDK bdev Controller 00:22:13.788 Firmware Version: 24.05 00:22:13.788 Recommended Arb Burst: 6 00:22:13.788 IEEE OUI Identifier: e4 d2 5c 00:22:13.788 
Multi-path I/O 00:22:13.788 May have multiple subsystem ports: Yes 00:22:13.788 May have multiple controllers: Yes 00:22:13.788 Associated with SR-IOV VF: No 00:22:13.788 Max Data Transfer Size: 131072 00:22:13.788 Max Number of Namespaces: 32 00:22:13.788 Max Number of I/O Queues: 127 00:22:13.788 NVMe Specification Version (VS): 1.3 00:22:13.788 NVMe Specification Version (Identify): 1.3 00:22:13.788 Maximum Queue Entries: 128 00:22:13.788 Contiguous Queues Required: Yes 00:22:13.788 Arbitration Mechanisms Supported 00:22:13.788 Weighted Round Robin: Not Supported 00:22:13.788 Vendor Specific: Not Supported 00:22:13.788 Reset Timeout: 15000 ms 00:22:13.788 Doorbell Stride: 4 bytes 00:22:13.788 NVM Subsystem Reset: Not Supported 00:22:13.788 Command Sets Supported 00:22:13.788 NVM Command Set: Supported 00:22:13.788 Boot Partition: Not Supported 00:22:13.788 Memory Page Size Minimum: 4096 bytes 00:22:13.788 Memory Page Size Maximum: 4096 bytes 00:22:13.788 Persistent Memory Region: Not Supported 00:22:13.788 Optional Asynchronous Events Supported 00:22:13.788 Namespace Attribute Notices: Supported 00:22:13.788 Firmware Activation Notices: Not Supported 00:22:13.788 ANA Change Notices: Not Supported 00:22:13.788 PLE Aggregate Log Change Notices: Not Supported 00:22:13.788 LBA Status Info Alert Notices: Not Supported 00:22:13.788 EGE Aggregate Log Change Notices: Not Supported 00:22:13.788 Normal NVM Subsystem Shutdown event: Not Supported 00:22:13.788 Zone Descriptor Change Notices: Not Supported 00:22:13.788 Discovery Log Change Notices: Not Supported 00:22:13.788 Controller Attributes 00:22:13.788 128-bit Host Identifier: Supported 00:22:13.788 Non-Operational Permissive Mode: Not Supported 00:22:13.788 NVM Sets: Not Supported 00:22:13.789 Read Recovery Levels: Not Supported 00:22:13.789 Endurance Groups: Not Supported 00:22:13.789 Predictable Latency Mode: Not Supported 00:22:13.789 Traffic Based Keep ALive: Not Supported 00:22:13.789 Namespace Granularity: Not Supported 00:22:13.789 SQ Associations: Not Supported 00:22:13.789 UUID List: Not Supported 00:22:13.789 Multi-Domain Subsystem: Not Supported 00:22:13.789 Fixed Capacity Management: Not Supported 00:22:13.789 Variable Capacity Management: Not Supported 00:22:13.789 Delete Endurance Group: Not Supported 00:22:13.789 Delete NVM Set: Not Supported 00:22:13.789 Extended LBA Formats Supported: Not Supported 00:22:13.789 Flexible Data Placement Supported: Not Supported 00:22:13.789 00:22:13.789 Controller Memory Buffer Support 00:22:13.789 ================================ 00:22:13.789 Supported: No 00:22:13.789 00:22:13.789 Persistent Memory Region Support 00:22:13.789 ================================ 00:22:13.789 Supported: No 00:22:13.789 00:22:13.789 Admin Command Set Attributes 00:22:13.789 ============================ 00:22:13.789 Security Send/Receive: Not Supported 00:22:13.789 Format NVM: Not Supported 00:22:13.789 Firmware Activate/Download: Not Supported 00:22:13.789 Namespace Management: Not Supported 00:22:13.789 Device Self-Test: Not Supported 00:22:13.789 Directives: Not Supported 00:22:13.789 NVMe-MI: Not Supported 00:22:13.789 Virtualization Management: Not Supported 00:22:13.789 Doorbell Buffer Config: Not Supported 00:22:13.789 Get LBA Status Capability: Not Supported 00:22:13.789 Command & Feature Lockdown Capability: Not Supported 00:22:13.789 Abort Command Limit: 4 00:22:13.789 Async Event Request Limit: 4 00:22:13.789 Number of Firmware Slots: N/A 00:22:13.789 Firmware Slot 1 Read-Only: N/A 00:22:13.789 Firmware 
Activation Without Reset: N/A 00:22:13.789 Multiple Update Detection Support: N/A 00:22:13.789 Firmware Update Granularity: No Information Provided 00:22:13.789 Per-Namespace SMART Log: No 00:22:13.789 Asymmetric Namespace Access Log Page: Not Supported 00:22:13.789 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:13.789 Command Effects Log Page: Supported 00:22:13.789 Get Log Page Extended Data: Supported 00:22:13.789 Telemetry Log Pages: Not Supported 00:22:13.789 Persistent Event Log Pages: Not Supported 00:22:13.789 Supported Log Pages Log Page: May Support 00:22:13.789 Commands Supported & Effects Log Page: Not Supported 00:22:13.789 Feature Identifiers & Effects Log Page:May Support 00:22:13.789 NVMe-MI Commands & Effects Log Page: May Support 00:22:13.789 Data Area 4 for Telemetry Log: Not Supported 00:22:13.789 Error Log Page Entries Supported: 128 00:22:13.789 Keep Alive: Supported 00:22:13.789 Keep Alive Granularity: 10000 ms 00:22:13.789 00:22:13.789 NVM Command Set Attributes 00:22:13.789 ========================== 00:22:13.789 Submission Queue Entry Size 00:22:13.789 Max: 64 00:22:13.789 Min: 64 00:22:13.789 Completion Queue Entry Size 00:22:13.789 Max: 16 00:22:13.789 Min: 16 00:22:13.789 Number of Namespaces: 32 00:22:13.789 Compare Command: Supported 00:22:13.789 Write Uncorrectable Command: Not Supported 00:22:13.789 Dataset Management Command: Supported 00:22:13.789 Write Zeroes Command: Supported 00:22:13.789 Set Features Save Field: Not Supported 00:22:13.789 Reservations: Supported 00:22:13.789 Timestamp: Not Supported 00:22:13.789 Copy: Supported 00:22:13.789 Volatile Write Cache: Present 00:22:13.789 Atomic Write Unit (Normal): 1 00:22:13.789 Atomic Write Unit (PFail): 1 00:22:13.789 Atomic Compare & Write Unit: 1 00:22:13.789 Fused Compare & Write: Supported 00:22:13.789 Scatter-Gather List 00:22:13.789 SGL Command Set: Supported 00:22:13.789 SGL Keyed: Supported 00:22:13.789 SGL Bit Bucket Descriptor: Not Supported 00:22:13.789 SGL Metadata Pointer: Not Supported 00:22:13.789 Oversized SGL: Not Supported 00:22:13.789 SGL Metadata Address: Not Supported 00:22:13.789 SGL Offset: Supported 00:22:13.789 Transport SGL Data Block: Not Supported 00:22:13.789 Replay Protected Memory Block: Not Supported 00:22:13.789 00:22:13.789 Firmware Slot Information 00:22:13.789 ========================= 00:22:13.789 Active slot: 1 00:22:13.789 Slot 1 Firmware Revision: 24.05 00:22:13.789 00:22:13.789 00:22:13.789 Commands Supported and Effects 00:22:13.789 ============================== 00:22:13.789 Admin Commands 00:22:13.789 -------------- 00:22:13.789 Get Log Page (02h): Supported 00:22:13.789 Identify (06h): Supported 00:22:13.789 Abort (08h): Supported 00:22:13.789 Set Features (09h): Supported 00:22:13.789 Get Features (0Ah): Supported 00:22:13.789 Asynchronous Event Request (0Ch): Supported 00:22:13.789 Keep Alive (18h): Supported 00:22:13.789 I/O Commands 00:22:13.789 ------------ 00:22:13.789 Flush (00h): Supported LBA-Change 00:22:13.789 Write (01h): Supported LBA-Change 00:22:13.789 Read (02h): Supported 00:22:13.789 Compare (05h): Supported 00:22:13.789 Write Zeroes (08h): Supported LBA-Change 00:22:13.789 Dataset Management (09h): Supported LBA-Change 00:22:13.789 Copy (19h): Supported LBA-Change 00:22:13.789 Unknown (79h): Supported LBA-Change 00:22:13.789 Unknown (7Ah): Supported 00:22:13.789 00:22:13.789 Error Log 00:22:13.789 ========= 00:22:13.789 00:22:13.789 Arbitration 00:22:13.789 =========== 00:22:13.789 Arbitration Burst: 1 00:22:13.789 00:22:13.789 Power 
Management 00:22:13.789 ================ 00:22:13.789 Number of Power States: 1 00:22:13.789 Current Power State: Power State #0 00:22:13.789 Power State #0: 00:22:13.789 Max Power: 0.00 W 00:22:13.789 Non-Operational State: Operational 00:22:13.789 Entry Latency: Not Reported 00:22:13.789 Exit Latency: Not Reported 00:22:13.789 Relative Read Throughput: 0 00:22:13.789 Relative Read Latency: 0 00:22:13.789 Relative Write Throughput: 0 00:22:13.789 Relative Write Latency: 0 00:22:13.789 Idle Power: Not Reported 00:22:13.789 Active Power: Not Reported 00:22:13.789 Non-Operational Permissive Mode: Not Supported 00:22:13.789 00:22:13.789 Health Information 00:22:13.789 ================== 00:22:13.789 Critical Warnings: 00:22:13.789 Available Spare Space: OK 00:22:13.789 Temperature: OK 00:22:13.789 Device Reliability: OK 00:22:13.789 Read Only: No 00:22:13.789 Volatile Memory Backup: OK 00:22:13.789 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:13.789 Temperature Threshold: [2024-05-15 10:41:29.464977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.464985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:22:13.789 [2024-05-15 10:41:29.464994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.789 [2024-05-15 10:41:29.465005] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:22:13.789 [2024-05-15 10:41:29.469058] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.789 [2024-05-15 10:41:29.469069] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.789 [2024-05-15 10:41:29.469073] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:22:13.789 [2024-05-15 10:41:29.469119] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:13.789 [2024-05-15 10:41:29.469133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.789 [2024-05-15 10:41:29.469142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.789 [2024-05-15 10:41:29.469149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.789 [2024-05-15 10:41:29.469157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.789 [2024-05-15 10:41:29.469166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.789 [2024-05-15 10:41:29.469191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.789 [2024-05-15 10:41:29.469205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.789 [2024-05-15 
10:41:29.469295] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.789 [2024-05-15 10:41:29.469303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.789 [2024-05-15 10:41:29.469308] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469314] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.789 [2024-05-15 10:41:29.469323] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.789 [2024-05-15 10:41:29.469337] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.789 [2024-05-15 10:41:29.469346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.789 [2024-05-15 10:41:29.469359] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.789 [2024-05-15 10:41:29.469460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.789 [2024-05-15 10:41:29.469466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.789 [2024-05-15 10:41:29.469471] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.469482] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:13.790 [2024-05-15 10:41:29.469488] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:13.790 [2024-05-15 10:41:29.469498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469509] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.469518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.469531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.469619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.469625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.469629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469633] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.469643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.469660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.469670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.469760] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.469766] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.469770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.469784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.469804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.469814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.469899] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.469905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.469909] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.469924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.469933] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.469941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.469951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470050] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470058] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470076] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470094] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470191] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470213] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470381] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470470] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470484] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470489] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470595] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:13.790 [2024-05-15 10:41:29.470601] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470610] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470628] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470645] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470737] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470746] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470781] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.470866] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.470872] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 [2024-05-15 10:41:29.470876] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470880] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.470889] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.470898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.470909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.470918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.471006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.790 [2024-05-15 10:41:29.471012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.790 
[2024-05-15 10:41:29.471016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.471022] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.790 [2024-05-15 10:41:29.471031] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.471036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.790 [2024-05-15 10:41:29.471040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.790 [2024-05-15 10:41:29.471055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.790 [2024-05-15 10:41:29.471065] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.790 [2024-05-15 10:41:29.471160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.471186] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471190] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.471303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471314] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471318] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.471327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471331] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.471440] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471446] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 
10:41:29.471454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.471463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471472] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471489] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.471584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471594] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471599] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.471609] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471613] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471617] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471635] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.471729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471735] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471739] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471743] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.471753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.471872] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.471878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.471882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471886] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 
00:22:13.791 [2024-05-15 10:41:29.471896] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.471904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.471912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.471921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.472012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.472018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.472022] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472026] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.472036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472040] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.472059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.472069] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.472154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.472160] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.472164] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.472179] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.472200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.472209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.472300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.472306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.472310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472314] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.472323] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 
10:41:29.472328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.472340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.791 [2024-05-15 10:41:29.472350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.791 [2024-05-15 10:41:29.472439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.791 [2024-05-15 10:41:29.472445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.791 [2024-05-15 10:41:29.472449] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472453] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.791 [2024-05-15 10:41:29.472463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.791 [2024-05-15 10:41:29.472471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.791 [2024-05-15 10:41:29.472479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.792 [2024-05-15 10:41:29.472489] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.792 [2024-05-15 10:41:29.472581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.792 [2024-05-15 10:41:29.472587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.792 [2024-05-15 10:41:29.472591] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472596] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.792 [2024-05-15 10:41:29.472605] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.792 [2024-05-15 10:41:29.472621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.792 [2024-05-15 10:41:29.472632] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.792 [2024-05-15 10:41:29.472724] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.792 [2024-05-15 10:41:29.472731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.792 [2024-05-15 10:41:29.472734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472740] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.792 [2024-05-15 10:41:29.472749] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472758] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.792 [2024-05-15 10:41:29.472765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.792 [2024-05-15 10:41:29.472775] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.792 [2024-05-15 10:41:29.472872] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.792 [2024-05-15 10:41:29.472878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.792 [2024-05-15 10:41:29.472882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472886] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.792 [2024-05-15 10:41:29.472895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.472904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.792 [2024-05-15 10:41:29.472912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.792 [2024-05-15 10:41:29.472922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.792 [2024-05-15 10:41:29.473010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.792 [2024-05-15 10:41:29.473017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.792 [2024-05-15 10:41:29.473020] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.473025] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.792 [2024-05-15 10:41:29.473034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.473038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.473043] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:13.792 [2024-05-15 10:41:29.477062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.792 [2024-05-15 10:41:29.477073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:13.792 [2024-05-15 10:41:29.477156] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.792 [2024-05-15 10:41:29.477162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.792 [2024-05-15 10:41:29.477166] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.792 [2024-05-15 10:41:29.477170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:13.792 [2024-05-15 10:41:29.477178] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:13.792 0 Kelvin (-273 Celsius) 00:22:13.792 Available Spare: 0% 00:22:13.792 Available Spare Threshold: 0% 00:22:13.792 Life Percentage Used: 0% 00:22:13.792 Data Units Read: 0 00:22:13.792 Data Units Written: 0 00:22:13.792 
Host Read Commands: 0 00:22:13.792 Host Write Commands: 0 00:22:13.792 Controller Busy Time: 0 minutes 00:22:13.792 Power Cycles: 0 00:22:13.792 Power On Hours: 0 hours 00:22:13.792 Unsafe Shutdowns: 0 00:22:13.792 Unrecoverable Media Errors: 0 00:22:13.792 Lifetime Error Log Entries: 0 00:22:13.792 Warning Temperature Time: 0 minutes 00:22:13.792 Critical Temperature Time: 0 minutes 00:22:13.792 00:22:13.792 Number of Queues 00:22:13.792 ================ 00:22:13.792 Number of I/O Submission Queues: 127 00:22:13.792 Number of I/O Completion Queues: 127 00:22:13.792 00:22:13.792 Active Namespaces 00:22:13.792 ================= 00:22:13.792 Namespace ID:1 00:22:13.792 Error Recovery Timeout: Unlimited 00:22:13.792 Command Set Identifier: NVM (00h) 00:22:13.792 Deallocate: Supported 00:22:13.792 Deallocated/Unwritten Error: Not Supported 00:22:13.792 Deallocated Read Value: Unknown 00:22:13.792 Deallocate in Write Zeroes: Not Supported 00:22:13.792 Deallocated Guard Field: 0xFFFF 00:22:13.792 Flush: Supported 00:22:13.792 Reservation: Supported 00:22:13.792 Namespace Sharing Capabilities: Multiple Controllers 00:22:13.792 Size (in LBAs): 131072 (0GiB) 00:22:13.792 Capacity (in LBAs): 131072 (0GiB) 00:22:13.792 Utilization (in LBAs): 131072 (0GiB) 00:22:13.792 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:13.792 EUI64: ABCDEF0123456789 00:22:13.792 UUID: 52376b26-ab54-4bf2-8392-a28e048fe094 00:22:13.792 Thin Provisioning: Not Supported 00:22:13.792 Per-NS Atomic Units: Yes 00:22:13.792 Atomic Boundary Size (Normal): 0 00:22:13.792 Atomic Boundary Size (PFail): 0 00:22:13.792 Atomic Boundary Offset: 0 00:22:13.792 Maximum Single Source Range Length: 65535 00:22:13.792 Maximum Copy Length: 65535 00:22:13.792 Maximum Source Range Count: 1 00:22:13.792 NGUID/EUI64 Never Reused: No 00:22:13.792 Namespace Write Protected: No 00:22:13.792 Number of LBA Formats: 1 00:22:13.792 Current LBA Format: LBA Format #00 00:22:13.792 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:13.792 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.792 rmmod nvme_tcp 00:22:13.792 rmmod nvme_fabrics 00:22:13.792 rmmod nvme_keyring 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 
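(Editor's aside on the identify dump above: it is produced by SPDK's userspace initiator driven by host/identify.sh. As an illustrative cross-check only — the /dev/nvme0 device name is assumed, and the subsystem is deleted immediately below, so this would only apply while nqn.2016-06.io.spdk:cnode1 was still exported at 10.0.0.2:4420 — roughly the same controller and namespace data could be read back with the kernel initiator and nvme-cli:)
nvme discover -t tcp -a 10.0.0.2 -s 4420                        # list the target's discovery log
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                                         # controller data (device name assumed)
nvme id-ns   /dev/nvme0n1                                       # namespace data: 131072 LBAs of 512 bytes, as reported above
nvme disconnect -n nqn.2016-06.io.spdk:cnode1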
00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2763476 ']' 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2763476 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2763476 ']' 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2763476 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2763476 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2763476' 00:22:13.792 killing process with pid 2763476 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2763476 00:22:13.792 [2024-05-15 10:41:29.656609] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:13.792 10:41:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2763476 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.359 10:41:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.892 10:41:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.892 00:22:16.892 real 0m9.589s 00:22:16.892 user 0m8.610s 00:22:16.892 sys 0m4.420s 00:22:16.892 10:41:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:16.892 10:41:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:16.892 ************************************ 00:22:16.892 END TEST nvmf_identify 00:22:16.892 ************************************ 00:22:16.892 10:41:32 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:16.892 10:41:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:16.892 10:41:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:16.892 10:41:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.892 ************************************ 00:22:16.892 START TEST nvmf_perf 00:22:16.892 ************************************ 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:16.892 * Looking for test 
storage... 00:22:16.892 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:16.892 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.893 10:41:32 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.893 10:41:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # 
set +x 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:22.168 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:22.168 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.168 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:22.168 Found net devices under 0000:27:00.0: cvl_0_0 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:22.169 Found net devices under 0000:27:00.1: cvl_0_1 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:22:22.169 00:22:22.169 --- 10.0.0.2 ping statistics --- 00:22:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.169 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:22:22.169 00:22:22.169 --- 10.0.0.1 ping statistics --- 00:22:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.169 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2767792 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2767792 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2767792 ']' 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:22.169 10:41:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:22.169 [2024-05-15 10:41:37.945070] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:22:22.169 [2024-05-15 10:41:37.945188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.169 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.428 [2024-05-15 10:41:38.071060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.428 [2024-05-15 10:41:38.174643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.428 [2024-05-15 10:41:38.174679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
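(Editor's aside on the nvmf_tcp_init trace above: before nvmf_tgt starts, nvmf/common.sh moves one of the two detected ports (0000:27:00.0/.1, ice driver) into a private network namespace and gives each side a 10.0.0.x address. A condensed, illustrative paraphrase of those steps — not the verbatim script — using the interface and namespace names from the trace:)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check
(The target itself then runs inside that namespace — see the "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF" invocation above — so the NVMe/TCP traffic in the perf runs below flows between the two namespaces over these interfaces.)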
00:22:22.429 [2024-05-15 10:41:38.174689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.429 [2024-05-15 10:41:38.174697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.429 [2024-05-15 10:41:38.174705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.429 [2024-05-15 10:41:38.174776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.429 [2024-05-15 10:41:38.174877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.429 [2024-05-15 10:41:38.174980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.429 [2024-05-15 10:41:38.174989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:22.995 10:41:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:23.932 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:23.932 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:23.932 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:03:00.0 00:22:23.932 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.189 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:24.189 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:03:00.0 ']' 00:22:24.189 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:24.189 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:24.189 10:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.189 [2024-05-15 10:41:40.022247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.189 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.449 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:24.449 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.707 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:24.708 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:22:24.708 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.967 [2024-05-15 10:41:40.635225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:24.967 [2024-05-15 10:41:40.635560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.968 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:24.968 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:03:00.0 ']' 00:22:24.968 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:22:24.968 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:24.968 10:41:40 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:22:26.346 Initializing NVMe Controllers 00:22:26.346 Attached to NVMe Controller at 0000:03:00.0 [1344:51c3] 00:22:26.346 Associating PCIE (0000:03:00.0) NSID 1 with lcore 0 00:22:26.346 Initialization complete. Launching workers. 00:22:26.346 ======================================================== 00:22:26.346 Latency(us) 00:22:26.346 Device Information : IOPS MiB/s Average min max 00:22:26.346 PCIE (0000:03:00.0) NSID 1 from core 0: 90315.06 352.79 353.99 78.75 6345.91 00:22:26.346 ======================================================== 00:22:26.346 Total : 90315.06 352.79 353.99 78.75 6345.91 00:22:26.346 00:22:26.605 10:41:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.605 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.983 Initializing NVMe Controllers 00:22:27.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:27.983 Initialization complete. Launching workers. 
00:22:27.983 ======================================================== 00:22:27.983 Latency(us) 00:22:27.983 Device Information : IOPS MiB/s Average min max 00:22:27.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10970.55 127.10 45803.11 00:22:27.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20017.43 5876.14 56002.21 00:22:27.983 ======================================================== 00:22:27.983 Total : 145.00 0.57 14152.56 127.10 56002.21 00:22:27.983 00:22:27.983 10:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:27.983 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.359 Initializing NVMe Controllers 00:22:29.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.359 Initialization complete. Launching workers. 00:22:29.359 ======================================================== 00:22:29.359 Latency(us) 00:22:29.359 Device Information : IOPS MiB/s Average min max 00:22:29.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10903.81 42.59 2936.44 429.16 7028.90 00:22:29.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.93 15.34 8206.40 6182.83 15858.31 00:22:29.360 ======================================================== 00:22:29.360 Total : 14829.75 57.93 4331.57 429.16 15858.31 00:22:29.360 00:22:29.617 10:41:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:29.617 10:41:45 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.617 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.193 Initializing NVMe Controllers 00:22:32.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.193 Controller IO queue size 128, less than required. 00:22:32.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.193 Controller IO queue size 128, less than required. 00:22:32.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.193 Initialization complete. Launching workers. 
00:22:32.193 ======================================================== 00:22:32.193 Latency(us) 00:22:32.193 Device Information : IOPS MiB/s Average min max 00:22:32.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2022.96 505.74 64616.29 43022.18 134773.45 00:22:32.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.35 144.59 232526.01 77490.78 392558.72 00:22:32.193 ======================================================== 00:22:32.193 Total : 2601.30 650.33 101947.49 43022.18 392558.72 00:22:32.193 00:22:32.457 10:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:32.457 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.457 No valid NVMe controllers or AIO or URING devices found 00:22:32.457 Initializing NVMe Controllers 00:22:32.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.457 Controller IO queue size 128, less than required. 00:22:32.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:32.457 Controller IO queue size 128, less than required. 00:22:32.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:32.457 WARNING: Some requested NVMe devices were skipped 00:22:32.457 10:41:48 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:32.718 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.251 Initializing NVMe Controllers 00:22:35.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.251 Controller IO queue size 128, less than required. 00:22:35.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.251 Controller IO queue size 128, less than required. 00:22:35.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.251 Initialization complete. Launching workers. 
00:22:35.251 00:22:35.251 ==================== 00:22:35.251 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:35.251 TCP transport: 00:22:35.251 polls: 17388 00:22:35.251 idle_polls: 8267 00:22:35.251 sock_completions: 9121 00:22:35.251 nvme_completions: 7695 00:22:35.251 submitted_requests: 11626 00:22:35.251 queued_requests: 1 00:22:35.251 00:22:35.251 ==================== 00:22:35.251 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:35.251 TCP transport: 00:22:35.251 polls: 21848 00:22:35.251 idle_polls: 11002 00:22:35.251 sock_completions: 10846 00:22:35.251 nvme_completions: 8175 00:22:35.251 submitted_requests: 12294 00:22:35.251 queued_requests: 1 00:22:35.251 ======================================================== 00:22:35.251 Latency(us) 00:22:35.251 Device Information : IOPS MiB/s Average min max 00:22:35.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1919.55 479.89 68116.77 38882.43 136729.27 00:22:35.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2039.31 509.83 63227.37 38384.99 174148.48 00:22:35.251 ======================================================== 00:22:35.251 Total : 3958.86 989.71 65598.12 38384.99 174148.48 00:22:35.251 00:22:35.251 10:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:35.251 10:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.509 rmmod nvme_tcp 00:22:35.509 rmmod nvme_fabrics 00:22:35.509 rmmod nvme_keyring 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2767792 ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2767792 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2767792 ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2767792 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2767792 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2767792' 00:22:35.509 killing process with pid 2767792 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2767792 00:22:35.509 [2024-05-15 10:41:51.364807] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:35.509 10:41:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2767792 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.413 10:41:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.317 10:41:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:39.317 00:22:39.317 real 0m22.541s 00:22:39.317 user 0m59.064s 00:22:39.317 sys 0m6.565s 00:22:39.317 10:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:39.317 10:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:39.317 ************************************ 00:22:39.317 END TEST nvmf_perf 00:22:39.317 ************************************ 00:22:39.317 10:41:54 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.317 10:41:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:39.317 10:41:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:39.317 10:41:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.317 ************************************ 00:22:39.317 START TEST nvmf_fio_host 00:22:39.317 ************************************ 00:22:39.317 10:41:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.317 * Looking for test storage... 
00:22:39.317 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.318 
10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.318 10:41:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.884 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:45.885 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:45.885 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:45.885 Found net devices under 0000:27:00.0: cvl_0_0 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:45.885 Found net devices under 0000:27:00.1: cvl_0_1 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 
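[Annotation] The device-discovery pass above boils down to a sysfs lookup: for each supported PCI function, glob /sys/bus/pci/devices/<addr>/net/ and keep the interface names found there. A minimal standalone sketch of that step, using the 0000:27:00.0 address and cvl_* names seen in this run (any other PCI address would work the same way):

  #!/usr/bin/env bash
  # Resolve a PCI network function to its kernel net device(s) via sysfs,
  # the same way gather_supported_nvmf_pci_devs does it above.
  pci=${1:-0000:27:00.0}                              # PCI address observed in this log
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
  if [[ ! -e ${pci_net_devs[0]} ]]; then
      echo "no net devices under $pci" >&2
      exit 1
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"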
00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:22:45.885 00:22:45.885 --- 10.0.0.2 ping statistics --- 00:22:45.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.885 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:45.885 00:22:45.885 --- 10.0.0.1 ping statistics --- 00:22:45.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.885 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2774726 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2774726 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 2774726 ']' 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:45.885 10:42:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.885 [2024-05-15 10:42:00.892863] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:22:45.885 [2024-05-15 10:42:00.892976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.885 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.885 [2024-05-15 10:42:01.022362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.885 [2024-05-15 10:42:01.125183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:45.885 [2024-05-15 10:42:01.125226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.885 [2024-05-15 10:42:01.125237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.885 [2024-05-15 10:42:01.125248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.885 [2024-05-15 10:42:01.125257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.885 [2024-05-15 10:42:01.125333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.885 [2024-05-15 10:42:01.125439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.885 [2024-05-15 10:42:01.125538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.885 [2024-05-15 10:42:01.125548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.885 [2024-05-15 10:42:01.614269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:45.885 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.886 Malloc1 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:22:45.886 [2024-05-15 10:42:01.723898] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:45.886 [2024-05-15 10:42:01.724205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:22:45.886 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:46.161 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:46.161 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:46.161 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # break 00:22:46.161 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:46.161 10:42:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:46.420 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:46.420 fio-3.35 00:22:46.420 Starting 1 thread 00:22:46.420 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.949 00:22:48.949 test: (groupid=0, jobs=1): err= 0: pid=2775260: Wed May 15 10:42:04 2024 00:22:48.949 read: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(94.9MiB/2005msec) 00:22:48.949 slat (usec): min=2, max=137, avg= 2.96, stdev= 1.26 00:22:48.949 clat (usec): min=1981, max=9862, avg=5801.88, stdev=437.03 00:22:48.949 lat (usec): min=2007, max=9865, avg=5804.84, stdev=436.96 00:22:48.949 clat percentiles (usec): 00:22:48.949 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:22:48.949 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:22:48.949 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:22:48.949 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 8356], 99.95th=[ 9241], 00:22:48.949 | 99.99th=[ 9765] 00:22:48.949 bw ( KiB/s): min=47616, max=49080, per=99.96%, avg=48446.00, stdev=652.16, samples=4 00:22:48.949 iops : min=11904, max=12270, avg=12111.50, stdev=163.04, samples=4 00:22:48.949 write: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(94.5MiB/2005msec); 0 zone resets 00:22:48.949 slat (usec): min=2, max=132, avg= 3.08, stdev= 1.14 00:22:48.949 clat (usec): min=1510, max=9159, avg=4753.17, stdev=363.28 00:22:48.949 lat (usec): min=1521, max=9162, avg=4756.25, stdev=363.22 00:22:48.949 clat percentiles (usec): 00:22:48.949 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:22:48.949 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:22:48.949 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:22:48.949 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6915], 99.95th=[ 8094], 00:22:48.949 | 99.99th=[ 9110] 00:22:48.949 bw ( KiB/s): min=48000, max=48800, per=100.00%, avg=48276.00, stdev=369.82, samples=4 00:22:48.949 iops : min=12000, max=12200, avg=12069.00, stdev=92.46, samples=4 00:22:48.949 lat (msec) : 2=0.03%, 4=0.51%, 10=99.46% 00:22:48.949 cpu : usr=83.78%, sys=15.82%, ctx=3, majf=0, minf=1531 00:22:48.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:48.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:48.949 issued rwts: total=24293,24195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.949 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:48.949 00:22:48.949 Run status group 0 (all jobs): 00:22:48.949 READ: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=94.9MiB (99.5MB), run=2005-2005msec 00:22:48.949 WRITE: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=94.5MiB (99.1MB), run=2005-2005msec 00:22:48.949 ----------------------------------------------------- 00:22:48.949 Suppressions used: 00:22:48.949 count bytes template 00:22:48.949 1 57 /usr/src/fio/parse.c 00:22:48.949 1 8 libtcmalloc_minimal.so 00:22:48.949 ----------------------------------------------------- 00:22:48.949 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # break 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:48.949 10:42:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.207 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:49.207 fio-3.35 00:22:49.207 Starting 1 thread 00:22:49.464 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.991 00:22:51.991 test: (groupid=0, jobs=1): err= 0: pid=2775982: Wed May 15 10:42:07 2024 00:22:51.991 read: IOPS=11.1k, BW=174MiB/s (183MB/s)(349MiB/2006msec) 00:22:51.991 slat (usec): min=2, max=103, avg= 2.94, stdev= 1.13 00:22:51.991 clat (usec): min=1965, max=13305, avg=6712.46, stdev=1585.24 00:22:51.991 lat (usec): min=1968, max=13308, avg=6715.40, stdev=1585.36 00:22:51.991 clat percentiles (usec): 00:22:51.991 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276], 00:22:51.991 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:22:51.991 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8848], 95.00th=[ 9634], 00:22:51.991 | 99.00th=[10552], 99.50th=[11076], 99.90th=[11863], 99.95th=[12256], 00:22:51.991 | 99.99th=[12649] 00:22:51.991 bw ( KiB/s): min=86560, max=95872, per=50.98%, avg=90936.00, stdev=3929.57, samples=4 00:22:51.991 iops : min= 5410, max= 5992, avg=5683.50, stdev=245.60, samples=4 00:22:51.991 write: IOPS=6651, BW=104MiB/s (109MB/s)(185MiB/1783msec); 0 zone resets 00:22:51.991 slat (usec): min=28, max=119, avg=31.25, stdev= 3.53 00:22:51.991 clat (usec): min=1984, max=13581, avg=8219.63, stdev=1328.53 00:22:51.991 lat (usec): min=2012, max=13610, 
avg=8250.88, stdev=1329.06 00:22:51.991 clat percentiles (usec): 00:22:51.991 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7111], 00:22:51.991 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8455], 00:22:51.991 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[10683], 00:22:51.991 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12256], 99.95th=[12518], 00:22:51.991 | 99.99th=[13566] 00:22:51.991 bw ( KiB/s): min=90848, max=99712, per=88.87%, avg=94576.00, stdev=3782.91, samples=4 00:22:51.991 iops : min= 5678, max= 6232, avg=5911.00, stdev=236.43, samples=4 00:22:51.991 lat (msec) : 2=0.01%, 4=1.78%, 10=93.19%, 20=5.03% 00:22:51.991 cpu : usr=87.38%, sys=12.12%, ctx=11, majf=0, minf=2479 00:22:51.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:51.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.991 issued rwts: total=22362,11859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.991 00:22:51.991 Run status group 0 (all jobs): 00:22:51.991 READ: bw=174MiB/s (183MB/s), 174MiB/s-174MiB/s (183MB/s-183MB/s), io=349MiB (366MB), run=2006-2006msec 00:22:51.991 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=185MiB (194MB), run=1783-1783msec 00:22:51.991 ----------------------------------------------------- 00:22:51.991 Suppressions used: 00:22:51.991 count bytes template 00:22:51.991 1 57 /usr/src/fio/parse.c 00:22:51.991 896 86016 /usr/src/fio/iolog.c 00:22:51.991 1 8 libtcmalloc_minimal.so 00:22:51.991 ----------------------------------------------------- 00:22:51.991 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.991 rmmod nvme_tcp 00:22:51.991 rmmod nvme_fabrics 00:22:51.991 rmmod nvme_keyring 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2774726 ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@490 -- # killprocess 2774726 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 2774726 ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 2774726 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2774726 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2774726' 00:22:51.991 killing process with pid 2774726 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 2774726 00:22:51.991 [2024-05-15 10:42:07.800248] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:51.991 10:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 2774726 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.556 10:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.088 10:42:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.088 00:22:55.088 real 0m15.477s 00:22:55.088 user 1m2.713s 00:22:55.088 sys 0m6.055s 00:22:55.088 10:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:55.088 10:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.088 ************************************ 00:22:55.088 END TEST nvmf_fio_host 00:22:55.088 ************************************ 00:22:55.088 10:42:10 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.088 10:42:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:55.088 10:42:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:55.088 10:42:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.088 ************************************ 00:22:55.088 START TEST nvmf_failover 00:22:55.088 ************************************ 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.088 * Looking for test storage... 
00:22:55.088 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 
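[Annotation] nvmftestinit here rebuilds, for the failover test, the same nvmf_tcp_init topology the fio_host run used above: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), the peer port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened in iptables. A condensed sketch, taken directly from the ip/iptables commands recorded in this log (the setup appears again a few lines below):

  # Condensed from nvmf_tcp_init as recorded in this log; interface names and
  # addresses are the ones used by this run.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator can reach the target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the target can reach back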
00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.088 10:42:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.412 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:00.413 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:00.413 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:00.413 Found net devices under 0000:27:00.0: cvl_0_0 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:00.413 Found net devices under 0000:27:00.1: cvl_0_1 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:00.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:23:00.413 00:23:00.413 --- 10.0.0.2 ping statistics --- 00:23:00.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.413 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:00.413 00:23:00.413 --- 10.0.0.1 ping statistics --- 00:23:00.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.413 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2780836 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2780836 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2780836 ']' 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:00.413 10:42:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.413 [2024-05-15 10:42:15.955654] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:23:00.413 [2024-05-15 10:42:15.955721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.413 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.413 [2024-05-15 10:42:16.046005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.413 [2024-05-15 10:42:16.140232] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.413 [2024-05-15 10:42:16.140270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.413 [2024-05-15 10:42:16.140280] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.413 [2024-05-15 10:42:16.140288] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.414 [2024-05-15 10:42:16.140295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.414 [2024-05-15 10:42:16.140437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.414 [2024-05-15 10:42:16.140466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.414 [2024-05-15 10:42:16.140478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.012 10:42:16 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.012 [2024-05-15 10:42:16.864267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.270 10:42:16 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:01.270 Malloc0 00:23:01.270 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.528 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.786 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.786 [2024-05-15 10:42:17.539538] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:01.786 [2024-05-15 10:42:17.539802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.786 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.044 [2024-05-15 10:42:17.679834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.044 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:02.044 [2024-05-15 10:42:17.820017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:02.044 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2781217 00:23:02.044 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2781217 /var/tmp/bdevperf.sock 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2781217 ']' 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:02.045 10:42:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.982 10:42:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:02.982 10:42:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:02.982 10:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.982 NVMe0n1 00:23:02.982 10:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.241 00:23:03.241 10:42:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2781490 00:23:03.241 10:42:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:03.241 10:42:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.615 10:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.615 [2024-05-15 10:42:20.236478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:04.615 10:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:07.898 10:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.898 00:23:07.898 10:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:07.898 [2024-05-15 10:42:23.665143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 [2024-05-15 10:42:23.665253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:23:07.898 10:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:11.186 10:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.186 [2024-05-15 10:42:26.831302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.186 10:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:12.119 10:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:12.119 [2024-05-15 10:42:27.981928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.981985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.981993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982090] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.119 [2024-05-15 10:42:27.982097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:12.376 10:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2781490 00:23:18.948 0 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2781217 ']' 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2781217' 00:23:18.948 killing process with pid 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2781217 00:23:18.948 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.948 [2024-05-15 10:42:17.908561] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:23:18.948 [2024-05-15 10:42:17.908682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781217 ] 00:23:18.948 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.948 [2024-05-15 10:42:18.019959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.948 [2024-05-15 10:42:18.110472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.948 Running I/O for 15 seconds... 
00:23:18.948 [2024-05-15 10:42:20.238822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.238880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.238921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.238938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.238953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.238967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.238979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.238989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239126] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97208 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 [2024-05-15 10:42:20.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.948 [2024-05-15 10:42:20.239737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.948 
[2024-05-15 10:42:20.239746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.239981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.239992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.949 [2024-05-15 10:42:20.240041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.949 [2024-05-15 10:42:20.240068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.949 [2024-05-15 10:42:20.240444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.240495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.240507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.949 [2024-05-15 10:42:20.240583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.949 [2024-05-15 10:42:20.240609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.949 [2024-05-15 10:42:20.240629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.949 [2024-05-15 10:42:20.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3480 is same with the state(5) to be set 00:23:18.949 [2024-05-15 10:42:20.240837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.240847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.240859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.240871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.240898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.240907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.240935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.240944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.240954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.240964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.240976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.240988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.240997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.241012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.241021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.241035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.241052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.241062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.241070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.241079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.241089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.241100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.241108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.241117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.241128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.241137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.241145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.241154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:23:18.949 [2024-05-15 10:42:20.241164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.949 [2024-05-15 10:42:20.241174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.949 [2024-05-15 10:42:20.241182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.949 [2024-05-15 10:42:20.241191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97616 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97632 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97640 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97648 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:18.950 [2024-05-15 10:42:20.241359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97656 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97664 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97672 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97680 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97688 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97696 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241586] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97704 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97712 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97720 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97728 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.241966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.241976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.241984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.241993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 
10:42:20.242066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242303] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.950 [2024-05-15 10:42:20.242351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:23:18.950 [2024-05-15 10:42:20.242361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.950 [2024-05-15 10:42:20.242371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.950 [2024-05-15 10:42:20.242380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97864 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97872 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97880 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97888 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97896 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 
10:42:20.242772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97904 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.242972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.242983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.242990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.242999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.951 [2024-05-15 10:42:20.243288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.951 [2024-05-15 10:42:20.243296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.951 [2024-05-15 10:42:20.243305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:23:18.951 [2024-05-15 10:42:20.243315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 
00:23:18.952 [2024-05-15 10:42:20.243475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.243894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.243904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.243912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.243921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:18.952 [2024-05-15 10:42:20.247915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.247969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.247979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.247989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.247996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248152] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.952 [2024-05-15 10:42:20.248244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:23:18.952 [2024-05-15 10:42:20.248255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.952 [2024-05-15 10:42:20.248265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.952 [2024-05-15 10:42:20.248273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 
10:42:20.248648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248871] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.248968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.248977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.248985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.248993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 
10:42:20.249330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.953 [2024-05-15 10:42:20.249469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:23:18.953 [2024-05-15 10:42:20.249479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.953 [2024-05-15 10:42:20.249489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.953 [2024-05-15 10:42:20.249497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.954 [2024-05-15 10:42:20.249505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:23:18.954 [2024-05-15 10:42:20.249517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:20.249676] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a3e80 was disconnected and freed. reset controller. 
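(Editor's aside: the records above all follow the same nvme_qpair.c pattern of "print command" followed by an ABORTED - SQ DELETION completion. A minimal sketch, say summarize_aborts.py, for tallying such records from a console dump like this one; the regexes below are assumptions derived only from the lines in this excerpt, not from SPDK source, and the script is not part of the test suite.)

import re
import sys
from collections import Counter

# Field layout assumed from the log lines above (an assumption, not SPDK-defined):
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97704 len:8 ...
#   spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z -]+) \((\d+)/(\d+)\)")

def summarize(stream):
    """Count command opcodes and completion statuses in a wrapped console log."""
    ops = Counter()
    statuses = Counter()
    lbas = []
    for line in stream:
        # finditer() because the captured log wraps several records onto one physical line.
        for m in CMD_RE.finditer(line):
            opcode, _sqid, _cid, _nsid, lba, _length = m.groups()
            ops[opcode] += 1
            lbas.append(int(lba))
        for m in CPL_RE.finditer(line):
            statuses[m.group(1).strip()] += 1
    return ops, statuses, lbas

if __name__ == "__main__":
    ops, statuses, lbas = summarize(sys.stdin)
    print("commands by opcode:", dict(ops))
    print("completion statuses:", dict(statuses))
    if lbas:
        print("lba range: %d..%d (%d command records)" % (min(lbas), max(lbas), len(lbas)))

(Usage would be something like: python3 summarize_aborts.py < console.log)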
00:23:18.954 [2024-05-15 10:42:20.249700] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:18.954 [2024-05-15 10:42:20.249715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.954 [2024-05-15 10:42:20.252755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.954 [2024-05-15 10:42:20.252797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:18.954 [2024-05-15 10:42:20.320989] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:18.954 [2024-05-15 10:42:23.665472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 
10:42:23.665681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.665990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.665998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.666015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.666032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.666058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.954 [2024-05-15 10:42:23.666075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.954 [2024-05-15 10:42:23.666250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.954 [2024-05-15 10:42:23.666259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 
10:42:23.666391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.666985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.666992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.955 [2024-05-15 10:42:23.667201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.955 [2024-05-15 10:42:23.667209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667288] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44472 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:23.667480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667675] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44528 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44536 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44544 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44552 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44560 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44568 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44576 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44584 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44592 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44600 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.667973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43832 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.667980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.667988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.956 [2024-05-15 10:42:23.667994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.956 [2024-05-15 10:42:23.668001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43840 len:8 PRP1 0x0 PRP2 0x0 00:23:18.956 [2024-05-15 10:42:23.668009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:23.668130] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4100 was disconnected and freed. reset controller. 
00:23:18.956 [2024-05-15 10:42:23.668146] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:23:18.956 [2024-05-15 10:42:23.668174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:18.956 [2024-05-15 10:42:23.668186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.956 [2024-05-15 10:42:23.668199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:18.956 [2024-05-15 10:42:23.668208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.956 [2024-05-15 10:42:23.668217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:18.956 [2024-05-15 10:42:23.668225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.956 [2024-05-15 10:42:23.668234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:18.956 [2024-05-15 10:42:23.668243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.956 [2024-05-15 10:42:23.668252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:18.956 [2024-05-15 10:42:23.668299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 
00:23:18.956 [2024-05-15 10:42:23.671098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:23:18.956 [2024-05-15 10:42:23.699145] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
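Editorial note: the long runs of "ABORTED - SQ DELETION (00/08)" completions above are the in-flight READ/WRITE commands that get cancelled each time this test tears down a queue pair and fails over to the next TCP target (4420 -> 4421 -> 4422). As a minimal sketch (not part of the test output), the snippet below shows how an I/O completion callback could recognize that status using the public SPDK status codes; the callback itself and how it would be registered are hypothetical, only the constants and spdk_nvme_cpl_is_error() come from the SPDK headers.

```c
/*
 * Minimal sketch (assumption: compiled against the SPDK public headers).
 * It classifies the "ABORTED - SQ DELETION" status (sct 0x0 / sc 0x8)
 * seen throughout this log; the callback name and its wiring are
 * hypothetical.
 */
#include "spdk/nvme.h"

static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* I/O completed before the submission queue was deleted. */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/*
		 * The queue pair went away mid-failover. In the run above the
		 * bdev_nvme layer responds by resetting the controller and
		 * continuing on the next listed path.
		 */
		return;
	}

	/* Any other error status would be surfaced to the submitter here. */
}
```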
00:23:18.956 [2024-05-15 10:42:27.982376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.956 [2024-05-15 10:42:27.982428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:27.982454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:27.982463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:27.982473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:27.982482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:27.982492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:27.982505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:27.982514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.956 [2024-05-15 10:42:27.982522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.956 [2024-05-15 10:42:27.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.982873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.982987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.982996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.983024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.957 [2024-05-15 10:42:27.983042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65760 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 
10:42:27.983352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.957 [2024-05-15 10:42:27.983552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.957 [2024-05-15 10:42:27.983560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.958 [2024-05-15 10:42:27.983577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.958 [2024-05-15 10:42:27.983594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.958 [2024-05-15 10:42:27.983611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.958 [2024-05-15 10:42:27.983630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.958 [2024-05-15 10:42:27.983648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.958 [2024-05-15 10:42:27.983657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.959 [2024-05-15 10:42:27.983888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.983927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66096 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.983935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.983957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.983965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66104 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.983974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.983983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.983989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.983996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66112 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66120 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66128 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 
PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66144 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66152 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66160 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66168 len:8 PRP1 0x0 PRP2 0x0 00:23:18.959 [2024-05-15 10:42:27.984213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.959 [2024-05-15 10:42:27.984221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.959 [2024-05-15 10:42:27.984227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.959 [2024-05-15 10:42:27.984234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66184 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 
10:42:27.984276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66200 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66208 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66216 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66224 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66232 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66240 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66248 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66256 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66264 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66272 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66280 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66288 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66296 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:18.960 [2024-05-15 10:42:27.984807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66368 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66376 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.984977] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.984983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.984990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66384 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.984997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66392 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.985027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66400 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.985059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.985091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65544 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.985119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65552 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.985151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.985159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.985165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.985172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65560 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.989419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.989464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.989476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.989487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65568 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.989497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.989505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.989512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.989520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65576 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.989533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.960 [2024-05-15 10:42:27.989541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.960 [2024-05-15 10:42:27.989548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.960 [2024-05-15 10:42:27.989555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65584 len:8 PRP1 0x0 PRP2 0x0 00:23:18.960 [2024-05-15 10:42:27.989564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.961 [2024-05-15 10:42:27.989692] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4600 was disconnected and freed. reset controller. 
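The long run of ABORTED - SQ DELETION (00/08) notices above is the expected fallout of deleting the submission queue during a controller reset: every READ/WRITE still queued on qpair 1 is completed manually with that status before the qpair is disconnected and freed. If it helps to quantify a run, the notices can simply be counted in whatever file captures the bdevperf output; a minimal sketch, assuming the try.txt file this test writes under the workspace is the capture in question:

  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt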
00:23:18.961 [2024-05-15 10:42:27.989711] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:18.961 [2024-05-15 10:42:27.989753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.961 [2024-05-15 10:42:27.989774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.961 [2024-05-15 10:42:27.989789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.961 [2024-05-15 10:42:27.989797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.961 [2024-05-15 10:42:27.989806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.961 [2024-05-15 10:42:27.989813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.961 [2024-05-15 10:42:27.989822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.961 [2024-05-15 10:42:27.989829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.961 [2024-05-15 10:42:27.989838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.961 [2024-05-15 10:42:27.989894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:18.961 [2024-05-15 10:42:27.992543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.961 [2024-05-15 10:42:28.025885] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
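Each reset in this phase follows the same shape: the active path drops, bdev_nvme logs 'Start failover from <old trid> to <new trid>' (here 10.0.0.2:4422 back to 10.0.0.2:4420), the queued admin commands are aborted, and the reconnect finishes with 'Resetting controller successful'. That the controller is still registered can be confirmed over the bdevperf RPC socket, as the script itself does further down; a sketch reusing the socket path and bdev name from this trace:

  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0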
00:23:18.961 
00:23:18.961 Latency(us)
00:23:18.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.961 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:18.961 Verification LBA range: start 0x0 length 0x4000
00:23:18.961 NVMe0n1 : 15.01 11627.95 45.42 415.03 0.00 10607.54 599.31 16349.51
00:23:18.961 ===================================================================================================================
00:23:18.961 Total : 11627.95 45.42 415.03 0.00 10607.54 599.31 16349.51
00:23:18.961 Received shutdown signal, test time was about 15.000000 seconds
00:23:18.961 
00:23:18.961 Latency(us)
00:23:18.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.961 ===================================================================================================================
00:23:18.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2784453
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2784453 /var/tmp/bdevperf.sock
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2784453 ']'
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:18.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
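The pass criterion for this first phase is simply that exactly three successful resets were observed; the host/failover.sh@65-67 trace above reduces to a check along these lines (a sketch: the capture file name and the failure branch are assumptions, only the grep pattern and the expected count of 3 come from the trace):

  count=$(grep -c 'Resetting controller successful' "$bdevperf_output")   # $bdevperf_output is a placeholder for the captured log
  if (( count != 3 )); then
      echo "expected 3 successful failover resets, saw $count" >&2        # assumed failure handling
      exit 1
  fi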
00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:18.961 10:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:19.894 10:42:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:19.894 10:42:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:19.894 10:42:35 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.894 [2024-05-15 10:42:35.569585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.894 10:42:35 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.894 [2024-05-15 10:42:35.705631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.894 10:42:35 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.465 NVMe0n1 00:23:20.465 10:42:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.725 00:23:20.725 10:42:36 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.984 00:23:20.984 10:42:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.984 10:42:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:21.241 10:42:36 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.241 10:42:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:24.521 10:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.521 10:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:24.521 10:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2785375 00:23:24.521 10:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2785375 00:23:24.521 10:42:40 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.488 0 00:23:25.488 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.488 [2024-05-15 
10:42:34.730761] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:23:25.488 [2024-05-15 10:42:34.730944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784453 ] 00:23:25.488 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.488 [2024-05-15 10:42:34.861226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.488 [2024-05-15 10:42:34.958378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.488 [2024-05-15 10:42:37.016579] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:25.488 [2024-05-15 10:42:37.016643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.488 [2024-05-15 10:42:37.016657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.488 [2024-05-15 10:42:37.016669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.488 [2024-05-15 10:42:37.016677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.488 [2024-05-15 10:42:37.016685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.488 [2024-05-15 10:42:37.016693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.488 [2024-05-15 10:42:37.016701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.488 [2024-05-15 10:42:37.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.488 [2024-05-15 10:42:37.016718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.488 [2024-05-15 10:42:37.016761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:25.488 [2024-05-15 10:42:37.016781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:25.488 [2024-05-15 10:42:37.111303] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:25.488 Running I/O for 1 seconds... 
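For reference, this second phase drives a failover explicitly: a fresh bdevperf is started with a 1-second verify job, listeners on ports 4421 and 4422 are added to the subsystem, all three ports are attached as paths to the same NVMe0 controller, and the path carrying I/O is then detached so the bdev layer has to fail over (here from 10.0.0.2:4420 to 10.0.0.2:4421). Condensed from the trace above into a sketch ($rpc is shorthand for the rpc.py call against the bdevperf socket; the addresses, ports and names are the ones used in this job):

  rpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path; I/O fails over to 4421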
00:23:25.488 00:23:25.488 Latency(us) 00:23:25.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.488 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:25.488 Verification LBA range: start 0x0 length 0x4000 00:23:25.488 NVMe0n1 : 1.01 11505.08 44.94 0.00 0.00 11084.35 1021.84 15038.79 00:23:25.488 =================================================================================================================== 00:23:25.488 Total : 11505.08 44.94 0.00 0.00 11084.35 1021.84 15038.79 00:23:25.488 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.488 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:25.747 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.747 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.747 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:26.007 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.266 10:42:41 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:29.558 10:42:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.558 10:42:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2784453 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2784453 ']' 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2784453 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2784453 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2784453' 00:23:29.558 killing process with pid 2784453 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2784453 00:23:29.558 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2784453 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover 
-- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.818 rmmod nvme_tcp 00:23:29.818 rmmod nvme_fabrics 00:23:29.818 rmmod nvme_keyring 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2780836 ']' 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2780836 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2780836 ']' 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2780836 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:29.818 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2780836 00:23:30.079 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:30.079 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:30.079 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2780836' 00:23:30.079 killing process with pid 2780836 00:23:30.079 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2780836 00:23:30.079 [2024-05-15 10:42:45.716240] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:30.079 10:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2780836 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.648 10:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.550 10:42:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:32.550 00:23:32.550 real 0m37.863s 00:23:32.550 user 2m1.212s 00:23:32.550 sys 0m6.670s 
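The failover teardown traced above can be replayed by hand against the bdevperf RPC socket; a minimal sketch using only the calls visible in this run (socket path, controller name, ports and subsystem NQN exactly as logged, rpc.py path shortened to be relative to the spdk checkout):

  # confirm bdevperf still sees the controller, then drop the remaining paths
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # tear down the target-side subsystem and unload the kernel initiator modules
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics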
00:23:32.550 10:42:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:32.550 10:42:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.550 ************************************ 00:23:32.550 END TEST nvmf_failover 00:23:32.550 ************************************ 00:23:32.550 10:42:48 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.550 10:42:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:32.550 10:42:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:32.550 10:42:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.550 ************************************ 00:23:32.550 START TEST nvmf_host_discovery 00:23:32.550 ************************************ 00:23:32.550 10:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.550 * Looking for test storage... 00:23:32.808 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.808 10:42:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 
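For reference, run_test above wraps the script invocation; the same sub-test can be started directly from an SPDK checkout. A minimal sketch, assuming a checkout in $SPDK_DIR (a placeholder, not taken from this log) and the same tcp transport; the defaults echoed by test/nvmf/common.sh in the trace are listed for orientation:

  cd "$SPDK_DIR"                                  # assumed checkout location
  ./test/nvmf/host/discovery.sh --transport=tcp   # same script and argument invoked by run_test above
  # defaults sourced from test/nvmf/common.sh, as seen in the trace:
  #   NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
  #   NVME_HOSTNQN=$(nvme gen-hostnqn)  NET_TYPE=phy-fallback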
00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.809 10:42:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.080 10:42:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.080 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:38.081 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:38.081 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:38.081 10:42:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:38.081 Found net devices under 0000:27:00.0: cvl_0_0 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:38.081 Found net devices under 0000:27:00.1: cvl_0_1 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.081 10:42:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:23:38.081 00:23:38.081 --- 10.0.0.2 ping statistics --- 00:23:38.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.081 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:23:38.081 00:23:38.081 --- 10.0.0.1 ping statistics --- 00:23:38.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.081 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.081 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2790440 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2790440 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2790440 ']' 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.082 10:42:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.082 [2024-05-15 10:42:53.862123] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:23:38.082 [2024-05-15 10:42:53.862222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.082 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.341 [2024-05-15 10:42:53.979833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.341 [2024-05-15 10:42:54.071383] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.341 [2024-05-15 10:42:54.071418] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.341 [2024-05-15 10:42:54.071427] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.341 [2024-05-15 10:42:54.071436] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.341 [2024-05-15 10:42:54.071443] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
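The phy-fallback topology built by nvmf_tcp_init above moves one port of the ice pair into a private network namespace for the target and leaves its sibling in the root namespace for the initiator; the sequence can be replayed with the same commands shown in the trace (interface names cvl_0_0/cvl_0_1 and 10.0.0.x addressing as in this run, nvmf_tgt path shortened to the checkout root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator
  # the nvmf target itself is then launched inside the namespace, as logged:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &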
00:23:38.341 [2024-05-15 10:42:54.071475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 [2024-05-15 10:42:54.585173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 [2024-05-15 10:42:54.593135] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:38.910 [2024-05-15 10:42:54.593415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 null0 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 null1 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2790505 00:23:38.910 
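Immediately below, the test provisions the discovery target over RPC and starts a second SPDK app (-m 0x1 -r /tmp/host.sock) that acts as the discovery host. A condensed manual sketch of that sequence, assuming scripts/rpc.py reaches the target on its default /var/tmp/spdk.sock socket (the socket waited on above), with binary paths shortened to the checkout root and the per-step sort/jq/xargs checks omitted:

  # target side: tcp transport, discovery listener on 8009, and two null bdevs for the namespaces added later in the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # host side: separate app on /tmp/host.sock, then attach it to the discovery service
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # the checks further down poll these two views until the discovered controller and bdevs show up
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs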
10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2790505 /tmp/host.sock 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2790505 ']' 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:38.910 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.910 10:42:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:38.910 [2024-05-15 10:42:54.675701] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:23:38.910 [2024-05-15 10:42:54.675780] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790505 ] 00:23:38.910 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.910 [2024-05-15 10:42:54.767846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.168 [2024-05-15 10:42:54.861908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.737 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.998 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:39.998 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 [2024-05-15 10:42:55.641563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.999 10:42:55 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:23:39.999 10:42:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:23:40.569 [2024-05-15 10:42:56.440057] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.569 [2024-05-15 10:42:56.440092] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.569 [2024-05-15 10:42:56.440122] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.829 [2024-05-15 10:42:56.572203] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:40.829 [2024-05-15 10:42:56.629559] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:40.829 [2024-05-15 10:42:56.629590] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.091 10:42:56 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:41.091 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.353 10:42:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.618 [2024-05-15 10:42:57.290096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.618 [2024-05-15 10:42:57.290946] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.618 [2024-05-15 10:42:57.290980] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.618 10:42:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.618 [2024-05-15 10:42:57.420400] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:41.618 10:42:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.618 10:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:23:41.877 [2024-05-15 10:42:57.521933] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.877 [2024-05-15 10:42:57.521961] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.877 [2024-05-15 10:42:57.521970] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.817 [2024-05-15 10:42:58.515360] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:42.817 [2024-05-15 10:42:58.515392] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:42.817 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:42.818 [2024-05-15 10:42:58.520932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.818 [2024-05-15 10:42:58.520960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.818 [2024-05-15 10:42:58.520972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.818 [2024-05-15 10:42:58.520981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.818 [2024-05-15 10:42:58.520990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.818 [2024-05-15 10:42:58.521003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.818 [2024-05-15 10:42:58.521011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.818 [2024-05-15 10:42:58.521019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.818 [2024-05-15 10:42:58.521027] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:42.818 [2024-05-15 10:42:58.530923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.818 [2024-05-15 10:42:58.540931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.541203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.541425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.541437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.541448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.541463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.541484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.541494] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.541505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.541526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
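Note for readers following this trace: the burst of connect() / "Resetting controller failed" errors that starts here is expected. The test step traced just above (host/discovery.sh@127) removes the 4420 listener from the target, and the host keeps retrying that now-dead path until the discovery poller prunes it. A sketch of that step, pieced together from the @127 and @131 markers in this log (not a verbatim copy of the script):

    # host/discovery.sh@127 -- drop the first listener on the target side
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # host/discovery.sh@131 -- wait until only the second port (4421) is left as a path;
    # until then the initiator logs the errno-111 reconnect attempts seen below.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'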
00:23:42.818 [2024-05-15 10:42:58.550976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.551222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.551565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.551576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.551585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.551597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.551620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.551627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.551636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.551652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.818 [2024-05-15 10:42:58.561017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.818 [2024-05-15 10:42:58.561375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:42.818 [2024-05-15 10:42:58.561619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.561637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.561649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.561663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:42.818 [2024-05-15 10:42:58.561689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.561700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.561710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.561723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
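The repeated 'local max=10', '(( max-- ))' and 'eval ...' lines interleaved with these errors all come from the waitforcondition helper in common/autotest_common.sh (lines 911-917 of that script, per the xtrace markers). Reassembled from those markers it looks roughly like the sketch below; only the success path appears in this log, so the final timeout return value is an assumption:

    # Sketch of common/autotest_common.sh waitforcondition, reconstructed from the
    # @911-@917 markers in this trace (not a verbatim copy of the SPDK source).
    waitforcondition() {
        local cond=$1            # @911: condition string supplied by the test
        local max=10             # @912: retry budget
        while (( max-- )); do                # @913
            if eval "$cond"; then            # @914: e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
                return 0                     # @915: condition met
            fi
            sleep 1                          # @917: back off before the next poll
        done
        return 1   # assumed: give up after ~10 attempts and let the caller fail the test
    }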
00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.818 [2024-05-15 10:42:58.571065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.571187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.571559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.571570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.571578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.571590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.571601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.571609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.571617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.572344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
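The conditions passed to waitforcondition are built from three small query helpers in host/discovery.sh; their bodies can be read directly off the @55, @59 and @63 fragments traced throughout this section (rpc_cmd piped through jq, sort and xargs). A sketch, assuming the same /tmp/host.sock host-side RPC socket used in this run:

    # discovery.sh@55 -- bdev names seen by the host, e.g. "nvme0n1 nvme0n2"
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # discovery.sh@59 -- attached NVMe controller names, e.g. "nvme0"
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # discovery.sh@63 -- service ports (paths) a given controller is connected on, e.g. "4420 4421"
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }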
00:23:42.818 [2024-05-15 10:42:58.581104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.581485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.581867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.581878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.581886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.581900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.581917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.581924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.581933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.581943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.818 [2024-05-15 10:42:58.591141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.591551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.591781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.591791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.591800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.591812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.591827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.591834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.591842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.591853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
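The notification bookkeeping exercised above and again a few lines below (discovery.sh@74-75, wrapped by @79-80) counts bdev add/remove events delivered to the host and advances a cursor so each check only sees new ones; that is why notify_id steps 1 -> 2 -> 4 across this run while expected_count stays small. A sketch consistent with those traces (the cursor arithmetic is inferred from the observed values, not copied from the script):

    # discovery.sh@74-75 -- count notifications newer than $notify_id, then move the cursor
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # discovery.sh@79-80 -- poll until exactly $1 new notifications have arrived
    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }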
00:23:42.818 [2024-05-15 10:42:58.601174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.818 [2024-05-15 10:42:58.601514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.601835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-05-15 10:42:58.601845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-05-15 10:42:58.601854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:23:42.818 [2024-05-15 10:42:58.601866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:23:42.818 [2024-05-15 10:42:58.601877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:42.818 [2024-05-15 10:42:58.601886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:42.818 [2024-05-15 10:42:58.601895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:42.818 [2024-05-15 10:42:58.601907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-05-15 10:42:58.603961] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:42.818 [2024-05-15 10:42:58.603988] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:42.818 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.819 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.079 
10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.079 10:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 [2024-05-15 10:42:59.871441] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:44.018 [2024-05-15 10:42:59.871467] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:44.018 [2024-05-15 10:42:59.871492] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.275 [2024-05-15 10:42:59.959538] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:44.533 [2024-05-15 10:43:00.275949] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:44.533 [2024-05-15 10:43:00.276005] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.533 10:43:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.533 request: 00:23:44.533 { 00:23:44.533 "name": "nvme", 00:23:44.533 "trtype": "tcp", 00:23:44.533 "traddr": "10.0.0.2", 00:23:44.533 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.533 "adrfam": "ipv4", 00:23:44.533 "trsvcid": "8009", 00:23:44.533 "wait_for_attach": true, 00:23:44.533 "method": "bdev_nvme_start_discovery", 00:23:44.533 "req_id": 1 00:23:44.533 } 00:23:44.533 Got JSON-RPC error response 00:23:44.533 response: 00:23:44.533 { 00:23:44.533 "code": -17, 00:23:44.533 "message": "File exists" 00:23:44.533 } 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.533 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.534 request: 00:23:44.534 { 00:23:44.534 "name": "nvme_second", 00:23:44.534 "trtype": "tcp", 00:23:44.534 "traddr": "10.0.0.2", 00:23:44.534 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.534 "adrfam": "ipv4", 00:23:44.534 "trsvcid": "8009", 00:23:44.534 "wait_for_attach": true, 00:23:44.534 "method": "bdev_nvme_start_discovery", 00:23:44.534 "req_id": 1 00:23:44.534 } 00:23:44.534 Got JSON-RPC error response 00:23:44.534 response: 00:23:44.534 { 00:23:44.534 "code": -17, 00:23:44.534 "message": "File exists" 00:23:44.534 } 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.534 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.791 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:44.791 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:44.791 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# sort 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.792 10:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.731 [2024-05-15 10:43:01.472505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.731 [2024-05-15 10:43:01.472808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.731 [2024-05-15 10:43:01.472822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4880 with addr=10.0.0.2, port=8010 00:23:45.731 [2024-05-15 10:43:01.472854] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:45.731 [2024-05-15 10:43:01.472866] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:45.731 [2024-05-15 10:43:01.472877] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:46.667 [2024-05-15 10:43:02.472513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.667 [2024-05-15 10:43:02.472792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.667 [2024-05-15 10:43:02.472804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4b00 with addr=10.0.0.2, port=8010 00:23:46.667 [2024-05-15 10:43:02.472830] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:46.667 [2024-05-15 10:43:02.472838] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:46.667 [2024-05-15 10:43:02.472847] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:47.652 [2024-05-15 10:43:03.472208] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:47.652 request: 00:23:47.652 { 00:23:47.652 "name": "nvme_second", 00:23:47.652 "trtype": "tcp", 00:23:47.652 "traddr": "10.0.0.2", 00:23:47.652 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:47.652 "adrfam": "ipv4", 00:23:47.652 "trsvcid": "8010", 00:23:47.652 "attach_timeout_ms": 3000, 00:23:47.652 "method": "bdev_nvme_start_discovery", 00:23:47.652 "req_id": 1 00:23:47.652 } 00:23:47.652 Got JSON-RPC error response 00:23:47.652 response: 00:23:47.652 { 00:23:47.652 "code": -110, 00:23:47.652 "message": "Connection timed out" 00:23:47.652 } 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2790505 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.652 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.913 rmmod nvme_tcp 00:23:47.913 rmmod nvme_fabrics 00:23:47.913 rmmod nvme_keyring 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2790440 ']' 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2790440 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 2790440 ']' 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 2790440 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:23:47.913 10:43:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2790440 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2790440' 00:23:47.913 killing process with pid 2790440 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 2790440 00:23:47.913 [2024-05-15 10:43:03.631823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:47.913 10:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 2790440 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.479 10:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.387 00:23:50.387 real 0m17.759s 00:23:50.387 user 0m21.893s 00:23:50.387 sys 0m5.277s 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.387 ************************************ 00:23:50.387 END TEST nvmf_host_discovery 00:23:50.387 ************************************ 00:23:50.387 10:43:06 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.387 10:43:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:50.387 10:43:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:50.387 10:43:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.387 ************************************ 00:23:50.387 START TEST nvmf_host_multipath_status 00:23:50.387 ************************************ 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.387 * Looking for test storage... 
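For orientation before the next test begins: the nvmf_host_discovery run finishes with the standard teardown traced in the preceding lines (nvmf/common.sh@488-496 plus killprocess): unload the nvme kernel modules, stop the nvmf_tgt reactor, and flush the test interfaces. In outline, with the pid variable name assumed rather than taken from the source:

    # Rough shape of the teardown traced above (nvmf/common.sh); $nvmfpid is an assumed
    # name for the target pid (2790440 in this run).
    nvmftestfini() {
        nvmfcleanup                                   # @488: sync, then modprobe -r nvme-tcp / nvme-fabrics
        [[ -n $nvmfpid ]] && killprocess "$nvmfpid"   # @489-490: stop the nvmf_tgt reactor process
        nvmf_tcp_fini                                 # @495-496: remove the spdk netns, flush cvl_0_1
    }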
00:23:50.387 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.387 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:50.388 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.645 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:50.645 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.645 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.645 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.645 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/bpftrace.sh 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.646 10:43:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.646 10:43:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:55.917 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:55.917 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
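The loop entered at the end of the trace above (its body continues below) resolves each matched PCI function to its kernel network interfaces simply by globbing /sys/bus/pci/devices/$pci/net/, which is what yields the "Found net devices under 0000:27:00.x" lines that follow. A minimal standalone version of that lookup, reusing the first address reported in this log:

    # List the net devices backed by one PCI function, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace.
    pci=0000:27:00.0                # address taken from this log; adjust per host
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && echo "Found net device under $pci: ${path##*/}"
    done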
00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:55.917 Found net devices under 0000:27:00.0: cvl_0_0 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.917 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:55.918 Found net devices under 0000:27:00.1: cvl_0_1 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.918 10:43:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:23:55.918 00:23:55.918 --- 10.0.0.2 ping statistics --- 00:23:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.918 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:23:55.918 00:23:55.918 --- 10.0.0.1 ping statistics --- 00:23:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.918 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2796289 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2796289 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2796289 ']' 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.918 10:43:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:55.918 [2024-05-15 10:43:11.672568] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
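At this point the test bed is wired up: cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened on the initiator interface, reachability is verified with a ping in each direction, and the NVMe-oF target has just been launched inside the namespace. Condensed from the commands in the trace (workspace paths shortened to ./; the final wait loop is a stand-in for the harness's waitforlisten, not its actual code):

    # Isolate the target-side port in its own namespace, leave the initiator
    # side in the default namespace, then verify reachability both ways.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace and poll its default RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done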
00:23:55.918 [2024-05-15 10:43:11.672691] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.918 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.176 [2024-05-15 10:43:11.799537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.176 [2024-05-15 10:43:11.900083] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.176 [2024-05-15 10:43:11.900125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.176 [2024-05-15 10:43:11.900134] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.176 [2024-05-15 10:43:11.900144] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.176 [2024-05-15 10:43:11.900154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.176 [2024-05-15 10:43:11.900222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.176 [2024-05-15 10:43:11.900226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2796289 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:56.744 [2024-05-15 10:43:12.498446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.744 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:57.004 Malloc0 00:23:57.004 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:57.004 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.264 10:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.264 [2024-05-15 10:43:13.090267] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 
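The multipath_status.sh prologue traced above then provisions that target over RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (the -r flag turns on the ANA reporting that the later checks depend on), its namespace, and the first listener, whose "Listening on 10.0.0.2 port 4420" notice follows below. The same sequence, condensed, with the full rpc.py workspace path shortened to a variable:

    rpc=./scripts/rpc.py    # the trace uses the full workspace path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420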
00:23:57.264 [2024-05-15 10:43:13.090604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.264 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:57.524 [2024-05-15 10:43:13.238548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2796618 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2796618 /var/tmp/bdevperf.sock 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2796618 ']' 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
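The initiator side is a bdevperf instance started with -z, so it comes up idle and waits to be driven over its own RPC socket at /var/tmp/bdevperf.sock. The trace that follows attaches the same subsystem twice, once per listener, adding -x multipath on the second attach so both connections become I/O paths of a single Nvme0n1 bdev, and then launches the verify workload through bdevperf.py perform_tests. A condensed sketch of that sequence (paths shortened, the brpc helper is just shorthand for this sketch, remaining flags copied verbatim from the trace below):

    # Host side: bdevperf in RPC-wait mode, driven over its own socket.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

    brpc() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_set_options -r -1
    brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Kick off the actual I/O inside the already-running bdevperf process.
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &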
00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:57.524 10:43:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:58.457 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:58.457 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:23:58.457 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:58.457 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:58.716 Nvme0n1 00:23:58.716 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:58.976 Nvme0n1 00:23:58.976 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.976 10:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:01.514 10:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:01.514 10:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:01.514 10:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.514 10:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.447 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:02.448 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.448 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.707 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.969 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.969 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.969 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.969 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.230 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.231 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.231 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.231 10:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.231 10:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.231 10:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:03.231 10:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.492 10:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:03.492 10:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.869 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.130 10:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.391 10:43:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:05.391 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.649 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:05.907 10:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.843 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:07.102 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.103 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:07.103 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.103 10:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
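Every check_status round from here on follows the same pattern: flip the two listeners' ANA states with nvmf_subsystem_listener_set_ana_state, sleep briefly, then ask bdevperf's bdev_nvme_get_io_paths for the path behind each listener port and pick one boolean (current, connected or accessible) out of the reply with jq, comparing it against the expected value. A standalone variant of that probe (paths shortened; this version prints the value rather than asserting, unlike the harness's helper of the same name):

    # Print the requested boolean for the I/O path behind one listener port,
    # e.g.  port_status 4420 current  ->  true / false
    port_status() {
        local port=$1 field=$2
        ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    }
    [[ $(port_status 4420 current) == true ]]    # assert, as the trace does with [[ ... ]]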
00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.361 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:07.619 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.877 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:08.135 10:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.145 10:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.406 10:43:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.406 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.665 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.665 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.665 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.665 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:09.923 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:10.180 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.180 10:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:11.119 10:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:11.119 10:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:11.119 10:43:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.119 10:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.380 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.380 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.380 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.380 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.639 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.640 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.640 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.898 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.156 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.156 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:12.156 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:12.156 10:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.415 10:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:13.354 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:13.354 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:13.354 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.354 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.613 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 
accessible false 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.871 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.129 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.129 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.129 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.130 10:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.387 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.387 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:14.387 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:14.387 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:14.646 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.646 10:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.017 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.017 10:43:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.018 10:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.275 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.533 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.533 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.533 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.533 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:16.533 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:16.792 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:16.792 10:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:17.733 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:17.733 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:17.733 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.733 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.990 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.990 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:17.990 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.990 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.247 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.248 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.248 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.248 10:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.248 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.248 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.248 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:18.248 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.507 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.768 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
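For readers following the trace above: each check_status round is the same two-sided pattern — set the ANA state of the two listeners on the target, wait a second for the host to react to the ANA change, then read the io_path flags back through the bdevperf RPC socket. Below is a minimal sketch of that pattern, using the same rpc.py path, subsystem NQN, socket, and jq filter that appear in the trace; the verify_path_flag helper name is illustrative only and is not a function from multipath_status.sh.

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: one listener per port, each given its own ANA state
    # (this is what set_ANA_state does at multipath_status.sh@59/@60 in the trace).
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1   # give the initiator time to process the ANA change, as the script does

    # Host side: ask bdevperf which io_paths it sees and check one flag per port
    # (current / connected / accessible, as port_status does at multipath_status.sh@64).
    verify_path_flag() {          # illustrative helper, not part of the test script
        local port=$1 flag=$2 expected=$3
        local actual
        actual=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag")
        [[ "$actual" == "$expected" ]]
    }
    verify_path_flag 4420 current true        # non-zero exit status if the flag differs
    verify_path_flag 4421 accessible false

With the multipath policy switched to active_active (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active at multipath_status.sh@116 above), every reachable path is expected to report current=true, which is what the subsequent "check_status true true ..." rounds in the trace assert.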
00:24:18.768 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:18.768 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.768 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:19.028 10:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:19.969 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:19.969 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.969 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.969 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.228 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.228 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:20.228 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.228 10:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.228 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.228 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.228 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.228 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.486 10:43:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.486 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:20.745 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.005 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:21.265 10:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:22.204 10:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:22.204 10:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.204 10:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.204 10:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.204 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.204 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:22.204 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.204 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.462 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.720 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.720 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.720 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.720 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2796618 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2796618 ']' 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2796618 00:24:22.979 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2796618 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2796618' 00:24:22.980 killing process with pid 2796618 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2796618 00:24:22.980 10:43:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2796618 00:24:23.238 Connection closed 
with partial response: 00:24:23.238 00:24:23.238 00:24:23.522 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2796618 00:24:23.522 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:23.522 [2024-05-15 10:43:13.335928] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:24:23.522 [2024-05-15 10:43:13.336087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796618 ] 00:24:23.522 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.522 [2024-05-15 10:43:13.463962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.522 [2024-05-15 10:43:13.556957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.522 Running I/O for 90 seconds... 00:24:23.522 [2024-05-15 10:43:25.803923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.803981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 
m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.522 [2024-05-15 10:43:25.804308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.522 [2024-05-15 10:43:25.804321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.523 [2024-05-15 10:43:25.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.804928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.804936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.523 [2024-05-15 10:43:25.805493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.523 [2024-05-15 10:43:25.805501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 
10:43:25.805948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.805980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.805994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.524 [2024-05-15 10:43:25.806492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.524 [2024-05-15 10:43:25.806856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.524 [2024-05-15 10:43:25.806863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.806986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.806994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 
10:43:25.807190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.525 [2024-05-15 10:43:25.807569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.807981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.807989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.808009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.808017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.808030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.808038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.808056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.808064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.808077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.808085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.525 [2024-05-15 10:43:25.808099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.525 [2024-05-15 10:43:25.808106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.526 [2024-05-15 10:43:25.808197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.526 [2024-05-15 10:43:25.808695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 
m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.526 [2024-05-15 10:43:25.808845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.526 [2024-05-15 10:43:25.808860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.808883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.808906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.808930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.808955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.808979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.808987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.809989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.809997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113128 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.527 [2024-05-15 10:43:25.810211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.527 [2024-05-15 10:43:25.810219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 
10:43:25.810496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.810902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.810911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.528 [2024-05-15 10:43:25.811398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.528 [2024-05-15 10:43:25.811422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.528 [2024-05-15 10:43:25.811445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.528 [2024-05-15 10:43:25.811577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.528 [2024-05-15 10:43:25.811585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.811798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.811978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.811986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 
10:43:25.812095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.529 [2024-05-15 10:43:25.812454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.812478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.812502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.812526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.529 [2024-05-15 10:43:25.812550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.529 [2024-05-15 10:43:25.812565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.530 [2024-05-15 10:43:25.812574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.530 [2024-05-15 10:43:25.812597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.530 [2024-05-15 10:43:25.812620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.530 [2024-05-15 10:43:25.812643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.812938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.812946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.530 [2024-05-15 10:43:25.813951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.530 [2024-05-15 10:43:25.813958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 
sqhd:0078 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.813972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.813979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.813993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.531 [2024-05-15 10:43:25.814158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 
[2024-05-15 10:43:25.814393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.531 [2024-05-15 10:43:25.814673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.531 [2024-05-15 10:43:25.814687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.814695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.814708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.814716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.814730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.814737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.814751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.814758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.532 [2024-05-15 10:43:25.815683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815705] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 
10:43:25.815925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.532 [2024-05-15 10:43:25.815947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.532 [2024-05-15 10:43:25.815961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.815969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.815982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.815990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113256 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.533 [2024-05-15 10:43:25.816444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:24:23.533 [2024-05-15 10:43:25.816564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.816635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.533 [2024-05-15 10:43:25.817282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.533 [2024-05-15 10:43:25.817295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.534 [2024-05-15 10:43:25.817878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.817900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.817921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.817943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.817964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.817980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.817988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 
10:43:25.818112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.534 [2024-05-15 10:43:25.818134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.534 [2024-05-15 10:43:25.818142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.818979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.818993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.535 [2024-05-15 10:43:25.819048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.535 [2024-05-15 10:43:25.819070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.535 [2024-05-15 10:43:25.819091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.535 [2024-05-15 10:43:25.819322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.535 [2024-05-15 10:43:25.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.819353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.819374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.819434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819448] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 
10:43:25.819666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.819990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.819997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.820018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-05-15 10:43:25.820193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.536 [2024-05-15 10:43:25.820207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.536 [2024-05-15 10:43:25.820214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.820227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.820234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.820247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.820255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.820268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.820276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.824448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:23.537 [2024-05-15 10:43:25.824484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.824502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.824515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.824530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.824538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.824554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.824563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.537 [2024-05-15 10:43:25.825844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.537 [2024-05-15 10:43:25.825859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.538 [2024-05-15 10:43:25.825873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.825887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.538 [2024-05-15 10:43:25.825895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.825910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.538 [2024-05-15 10:43:25.825917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.825933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.538 [2024-05-15 10:43:25.825941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.825955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.538 [2024-05-15 10:43:25.825962] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.825977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.825985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-05-15 10:43:25.826187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.538 [2024-05-15 10:43:25.826202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 
[2024-05-15 10:43:25.826210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:23.538 [2024-05-15 10:43:25.826224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.538 [2024-05-15 10:43:25.826233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:23.538 [2024-05-15 10:43:25.826248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.538 [2024-05-15 10:43:25.826256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[further repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: READ commands (SGL TRANSPORT DATA BLOCK) and WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, nsid:1, lba 112240-113256, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; timestamps 2024-05-15 10:43:25.826-10:43:25.832, console time 00:24:23.538-00:24:23.543]
len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.832500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.832509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.543 [2024-05-15 10:43:25.833180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.543 [2024-05-15 10:43:25.833188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.544 [2024-05-15 10:43:25.833736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 
10:43:25.833948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.833985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.833995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.544 [2024-05-15 10:43:25.834009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.544 [2024-05-15 10:43:25.834016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112448 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.545 [2024-05-15 10:43:25.834893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.545 [2024-05-15 10:43:25.834915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.545 [2024-05-15 10:43:25.834938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.834984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.834999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 
dnr:0 00:24:23.545 [2024-05-15 10:43:25.835043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.545 [2024-05-15 10:43:25.835281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.545 [2024-05-15 10:43:25.835303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.545 [2024-05-15 10:43:25.835317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.545 [2024-05-15 10:43:25.835324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113240 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.835880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.835902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.835924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.835946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.835968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.835982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.835990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.836011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.836033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.546 [2024-05-15 10:43:25.836061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.836083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.836104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 10:43:25.836118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.546 [2024-05-15 10:43:25.836126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.546 [2024-05-15 
00:24:23.546 - 00:24:23.551 [2024-05-15 10:43:25.836140 - 10:43:25.842661] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion notice pairs on qid:1 for every outstanding I/O: WRITE sqid:1 nsid:1 lba:112672..113256 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:112240..112664 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0.
dnr:0 00:24:23.551 [2024-05-15 10:43:25.842675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.551 [2024-05-15 10:43:25.842839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.551 [2024-05-15 10:43:25.842853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.842861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.842882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.842905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.842929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.842951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.842973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.842987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.842994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 
10:43:25.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113232 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.843526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.843691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.552 [2024-05-15 10:43:25.843699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.552 [2024-05-15 10:43:25.847353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.552 [2024-05-15 10:43:25.847385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.847517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.847525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.553 [2024-05-15 10:43:25.848983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.848997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 10:43:25.849132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.553 [2024-05-15 10:43:25.849140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.553 [2024-05-15 
10:43:25.849154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.849986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.849994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.554 [2024-05-15 10:43:25.850552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 10:43:25.850696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.554 [2024-05-15 
10:43:25.850717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.554 [2024-05-15 10:43:25.850725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 
cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.850983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.850997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.555 [2024-05-15 10:43:25.851334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 
10:43:25.851377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.851464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.851995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.555 [2024-05-15 10:43:25.852357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.555 [2024-05-15 10:43:25.852370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:24:23.556 [2024-05-15 10:43:25.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.556 [2024-05-15 10:43:25.852812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.852979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.852987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.556 [2024-05-15 10:43:25.853836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.556 [2024-05-15 10:43:25.853850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.853966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.853980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.853987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 
10:43:25.854513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.557 [2024-05-15 10:43:25.854952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.557 [2024-05-15 10:43:25.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.557 [2024-05-15 10:43:25.854974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.854988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.854995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.558 [2024-05-15 10:43:25.855131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.558 [2024-05-15 10:43:25.855148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.558 [2024-05-15 10:43:25.855155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 
m:0 dnr:0
00:24:23.558 [2024-05-15 10:43:25.855170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.558 [2024-05-15 10:43:25.855177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.558 [2024-05-15 10:43:25.855191 through 10:43:25.861135] nvme_qpair.c: repeated *NOTICE* pairs: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1, lba 112240-113256, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK) each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:24:23.562 [2024-05-15 10:43:25.861152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.861280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.861297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.562 [2024-05-15 10:43:25.861304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.865879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.562 [2024-05-15 10:43:25.865893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.865917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.562 [2024-05-15 10:43:25.865929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.865946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.865954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 
m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.865971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.865980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.865997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.562 [2024-05-15 10:43:25.866145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.562 [2024-05-15 10:43:25.866161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.866326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 
10:43:25.866570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112792 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.866972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.866983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.563 [2024-05-15 10:43:25.867188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.867214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.563 [2024-05-15 10:43:25.867240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.563 [2024-05-15 10:43:25.867259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:25.867421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:25.867450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:25.867476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:25.867512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:25.867536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:25.867544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.893503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:36.893527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.893543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:36.893551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.564 [2024-05-15 10:43:36.894629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.564 [2024-05-15 10:43:36.894805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.564 [2024-05-15 10:43:36.894973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.564 [2024-05-15 10:43:36.894980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.894995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.565 [2024-05-15 10:43:36.895297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:24:23.565 [2024-05-15 10:43:36.895461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.565 [2024-05-15 10:43:36.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.565 [2024-05-15 10:43:36.895594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.565 [2024-05-15 10:43:36.895602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.565 Received shutdown signal, test time was about 23.897975 seconds 00:24:23.565 00:24:23.565 Latency(us) 00:24:23.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.565 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.565 Verification LBA range: start 0x0 length 0x4000 00:24:23.565 Nvme0n1 : 23.90 10921.68 42.66 0.00 0.00 11698.99 745.90 3090539.79 00:24:23.565 =================================================================================================================== 00:24:23.565 Total : 10921.68 42.66 0.00 0.00 11698.99 745.90 3090539.79 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- 
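
The verification job's summary above reports 10921.68 IOPS at a 4096-byte IO size over a roughly 23.90-second run. As a quick cross-check of those columns (a minimal sketch using only the numbers printed in the log, not part of the test output), the MiB/s figure is simply IOPS multiplied by the IO size:

  # Cross-check the summary table: MiB/s = IOPS * io_size / 2^20, total data = MiB/s * runtime.
  # All three input values are copied from the log above.
  iops=10921.68; io_size=4096; runtime=23.90
  awk -v iops="$iops" -v sz="$io_size" -v t="$runtime" 'BEGIN {
      mib_s = iops * sz / (1024 * 1024)            # ~42.66 MiB/s, matching the MiB/s column
      printf "throughput: %.2f MiB/s, total: %.0f MiB\n", mib_s, mib_s * t
  }'

This prints a throughput of about 42.66 MiB/s and roughly 1 GiB of data verified during the run, consistent with the Nvme0n1 and Total rows of the table.
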
host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.565 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.565 rmmod nvme_tcp 00:24:23.565 rmmod nvme_fabrics 00:24:23.565 rmmod nvme_keyring 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2796289 ']' 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2796289 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2796289 ']' 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2796289 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2796289 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2796289' 00:24:23.824 killing process with pid 2796289 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2796289 00:24:23.824 [2024-05-15 10:43:39.443113] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:23.824 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2796289 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.388 10:43:39 
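
The trace above shows multipath_status.sh finishing up: the test subsystem is deleted over the SPDK RPC interface, the exit trap is cleared, the scratch file is removed, and nvmftestfini then unloads the host-side NVMe kernel modules and stops the nvmf target process (reactor pid 2796289 in this run). A condensed sketch of those steps, using only commands that appear in the trace; the workspace path and pid are specific to this run and would differ elsewhere:

  # Teardown as performed above; SPDK path and target pid are taken from this run's log.
  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  TGT_PID=2796289                                  # nvmf target (reactor_0) pid reported by killprocess

  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT                       # drop the test's exit/error trap
  rm -f $SPDK/test/nvmf/host/try.txt               # per-test scratch file

  # nvmftestfini: flush, unload the host-side initiator modules, then stop the target
  sync
  modprobe -v -r nvme-tcp                          # the trace also shows nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill "$TGT_PID" && wait "$TGT_PID"               # killprocess first confirms the pid is an SPDK reactor, not sudo
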
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.388 10:43:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.292 10:43:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.292 00:24:26.292 real 0m35.831s 00:24:26.292 user 1m33.603s 00:24:26.292 sys 0m8.735s 00:24:26.292 10:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:26.292 10:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:26.292 ************************************ 00:24:26.292 END TEST nvmf_host_multipath_status 00:24:26.292 ************************************ 00:24:26.292 10:43:42 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:26.292 10:43:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:26.292 10:43:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:26.292 10:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:26.292 ************************************ 00:24:26.292 START TEST nvmf_discovery_remove_ifc 00:24:26.292 ************************************ 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:26.292 * Looking for test storage... 00:24:26.292 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:26.292 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.293 10:43:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.293 10:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.561 10:43:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:31.561 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:31.561 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:31.561 Found net devices under 0000:27:00.0: cvl_0_0 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.561 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:31.562 Found net devices under 0000:27:00.1: cvl_0_1 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@414 -- # is_hw=yes 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.562 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:24:31.822 00:24:31.822 --- 10.0.0.2 ping statistics --- 00:24:31.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.822 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:24:31.822 00:24:31.822 --- 10.0.0.1 ping statistics --- 00:24:31.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.822 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.822 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2805718 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2805718 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2805718 ']' 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:31.823 10:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.823 [2024-05-15 10:43:47.607743] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:24:31.823 [2024-05-15 10:43:47.607867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.081 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.081 [2024-05-15 10:43:47.750954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.081 [2024-05-15 10:43:47.844140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.081 [2024-05-15 10:43:47.844182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.081 [2024-05-15 10:43:47.844192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.081 [2024-05-15 10:43:47.844202] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.081 [2024-05-15 10:43:47.844209] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.081 [2024-05-15 10:43:47.844241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.651 [2024-05-15 10:43:48.368156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.651 [2024-05-15 10:43:48.376097] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:32.651 [2024-05-15 10:43:48.376394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:32.651 null0 00:24:32.651 [2024-05-15 10:43:48.408218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2806019 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2806019 /tmp/host.sock 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2806019 ']' 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 
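For readers reconstructing the setup from the xtrace above: prepare_net_devs finds the two test ports by globbing /sys/bus/pci/devices/$pci/net/ for the supported PCI IDs (here 0000:27:00.0 -> cvl_0_0 and 0000:27:00.1 -> cvl_0_1), and nvmf_tcp_init then wires them into a target/initiator pair across a network namespace. A condensed sketch of that wiring, using the interface and namespace names from this run (they will differ on other hosts), is:

# target port moves into its own namespace, initiator port stays in the root namespace
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the nvmf target app is then launched inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2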
00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:32.651 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.651 10:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:32.651 [2024-05-15 10:43:48.507953] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:24:32.651 [2024-05-15 10:43:48.508071] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806019 ] 00:24:32.912 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.912 [2024-05-15 10:43:48.623143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.912 [2024-05-15 10:43:48.715426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.479 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.737 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:33.737 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.737 10:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.675 [2024-05-15 10:43:50.372532] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:34.675 [2024-05-15 10:43:50.372566] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:34.675 [2024-05-15 
10:43:50.372599] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:34.675 [2024-05-15 10:43:50.460649] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:34.933 [2024-05-15 10:43:50.563934] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:34.933 [2024-05-15 10:43:50.564005] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:34.933 [2024-05-15 10:43:50.564048] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:34.933 [2024-05-15 10:43:50.564070] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:34.933 [2024-05-15 10:43:50.564103] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:34.933 [2024-05-15 10:43:50.612065] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a3c00 was disconnected and freed. delete nvme_qpair. 
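The host side of this test is driven entirely over /tmp/host.sock: a second app is started with --wait-for-rpc, bdev_nvme error detection is enabled, and discovery is pointed at the target's discovery listener on port 8009. The polling helpers can be reconstructed from the repeated bdev_get_bdevs / sleep 1 pattern in the trace; the sketch below is that reconstruction (the real helpers in host/discovery_remove_ifc.sh may differ in detail, e.g. they likely carry a timeout):

# host app (separate from the target) exposing its RPC socket at /tmp/host.sock
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1     # rpc_cmd wraps scripts/rpc.py in this suite
rpc_cmd -s /tmp/host.sock framework_start_init
# attach through the discovery service; the short loss/reconnect timeouts make the
# interface-removal step below converge within a couple of seconds
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

get_bdev_list() {   # names of all bdevs the host app currently sees
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {   # poll once a second until the bdev list equals the expected value
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}
wait_for_bdev nvme0n1   # discovery created controller nvme0, hence bdev nvme0n1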
00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.933 10:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.310 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.311 10:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.243 10:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.238 10:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.172 10:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.110 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.371 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.371 10:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.371 [2024-05-15 10:43:56.001833] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, 
errno 110: Connection timed out 00:24:40.371 [2024-05-15 10:43:56.001895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.371 [2024-05-15 10:43:56.001911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.371 [2024-05-15 10:43:56.001924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.371 [2024-05-15 10:43:56.001933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.371 [2024-05-15 10:43:56.001949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.371 [2024-05-15 10:43:56.001957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.371 [2024-05-15 10:43:56.001966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.371 [2024-05-15 10:43:56.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.371 [2024-05-15 10:43:56.001983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.371 [2024-05-15 10:43:56.001992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.371 [2024-05-15 10:43:56.002000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3980 is same with the state(5) to be set 00:24:40.371 [2024-05-15 10:43:56.011826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:24:40.371 [2024-05-15 10:43:56.021846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.307 10:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.307 [2024-05-15 10:43:57.085105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:42.249 [2024-05-15 10:43:58.109088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:42.249 [2024-05-15 10:43:58.109170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3980 with addr=10.0.0.2, port=4420 00:24:42.249 [2024-05-15 10:43:58.109204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150003a3980 is same with the state(5) to be set 00:24:42.249 [2024-05-15 10:43:58.109883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:24:42.249 [2024-05-15 10:43:58.109925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.249 [2024-05-15 10:43:58.109976] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:42.249 [2024-05-15 10:43:58.110021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.249 [2024-05-15 10:43:58.110069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.249 [2024-05-15 10:43:58.110094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.249 [2024-05-15 10:43:58.110109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.249 [2024-05-15 10:43:58.110126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.249 [2024-05-15 10:43:58.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.249 [2024-05-15 10:43:58.110157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.249 [2024-05-15 10:43:58.110171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.249 [2024-05-15 10:43:58.110187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.249 [2024-05-15 10:43:58.110201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.249 [2024-05-15 10:43:58.110216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
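The errno 110 and "Resetting controller failed" messages above are the expected consequence of deleting 10.0.0.2 and downing cvl_0_0 inside the namespace while the host still holds nvme0: reconnects are retried every --reconnect-delay-sec 1 and abandoned after --ctrlr-loss-timeout-sec 2, at which point the controller and its bdev are dropped, which is exactly what the wait_for_bdev '' poll in this trace is waiting for. The script itself does not inspect the controller state, but an illustrative way to watch it from the side, assuming the same /tmp/host.sock, would be:

# illustrative only -- not executed by discovery_remove_ifc.sh
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers          # nvme0 still listed while reconnects are attempted
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'  # empty once the controller is given up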
00:24:42.249 [2024-05-15 10:43:58.110297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:42.249 [2024-05-15 10:43:58.111295] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:42.249 [2024-05-15 10:43:58.111313] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:42.249 10:43:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.507 10:43:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.507 10:43:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:43.446 10:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.379 [2024-05-15 10:44:00.163863] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.379 [2024-05-15 10:44:00.163908] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.379 [2024-05-15 10:44:00.163928] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.637 [2024-05-15 10:44:00.294016] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:44.637 10:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.637 [2024-05-15 10:44:00.476785] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:44.637 [2024-05-15 10:44:00.476836] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:44.637 [2024-05-15 10:44:00.476870] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:44.637 [2024-05-15 10:44:00.476889] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:44.637 [2024-05-15 10:44:00.476902] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.638 [2024-05-15 10:44:00.481626] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a4380 was disconnected and freed. delete nvme_qpair. 
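The re-attach above needs no new RPC: the discovery service started earlier keeps polling, so restoring the target-side address is enough for it to find the subsystem again and create a fresh controller (nvme1, hence the nvme1n1 bdev being waited for). Condensed from the trace:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # same polling helper as sketched earlier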
00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2806019 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2806019 ']' 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2806019 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2806019 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2806019' 00:24:45.573 killing process with pid 2806019 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2806019 00:24:45.573 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2806019 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.142 rmmod nvme_tcp 00:24:46.142 rmmod nvme_fabrics 00:24:46.142 rmmod nvme_keyring 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2805718 ']' 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2805718 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2805718 ']' 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2805718 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2805718 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2805718' 00:24:46.142 killing process with pid 2805718 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2805718 00:24:46.142 [2024-05-15 10:44:01.941582] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:46.142 10:44:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2805718 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.711 10:44:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.619 10:44:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.619 00:24:48.619 real 0m22.373s 00:24:48.619 user 0m27.754s 00:24:48.619 sys 0m5.249s 00:24:48.619 10:44:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:48.619 10:44:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.619 ************************************ 00:24:48.619 END TEST nvmf_discovery_remove_ifc 00:24:48.619 ************************************ 00:24:48.619 10:44:04 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:48.619 10:44:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:48.619 10:44:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:48.619 10:44:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
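Teardown for nvmf_discovery_remove_ifc, as traced above, mirrors the setup; stripped of the xtrace noise it amounts to roughly the following (PIDs and interface names are from this run, helper names from nvmf/common.sh and autotest_common.sh):

killprocess "$hostpid"      # host app on /tmp/host.sock (pid 2806019 here)
# nvmftestfini -> nvmfcleanup:
sync
modprobe -v -r nvme-tcp     # unloads nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"      # target app inside the namespace (pid 2805718 here)
# nvmf_tcp_fini:
_remove_spdk_ns             # deletes cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1    # drop the initiator-side test address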
00:24:48.876 ************************************ 00:24:48.876 START TEST nvmf_identify_kernel_target 00:24:48.876 ************************************ 00:24:48.876 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:48.877 * Looking for test storage... 00:24:48.877 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:48.877 10:44:04 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.877 10:44:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.149 10:44:09 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:54.149 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:54.149 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.149 10:44:09 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:54.149 Found net devices under 0000:27:00.0: cvl_0_0 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:54.149 Found net devices under 0000:27:00.1: cvl_0_1 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.149 10:44:09 
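The "Found net devices under 0000:27:00.x" lines above come from walking sysfs from each matching PCI function to the network interface it exposes. A stripped-down sketch of that lookup, using the two ice ports found in this run (the loop body is ours, not a quote of nvmf/common.sh):

for pci in 0000:27:00.0 0000:27:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue          # skip functions with no bound netdev
        dev=${netdir##*/}                     # e.g. cvl_0_0 / cvl_0_1
        state=$(cat "$netdir/operstate")      # the test scripts only keep ports that are up
        echo "Found net device under $pci: $dev ($state)"
    done
done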
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:24:54.149 00:24:54.149 --- 10.0.0.2 ping statistics --- 00:24:54.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.149 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:24:54.149 00:24:54.149 --- 10.0.0.1 ping statistics --- 00:24:54.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.149 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.149 10:44:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.149 10:44:10 
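The two ping exchanges above verify the loopback topology that nvmf_tcp_init assembled out of the cvl_0_* ports: one port is moved into a private network namespace and given 10.0.0.2, the other keeps 10.0.0.1 in the root namespace, which is where this test later points the kernel target and nvme discover. A minimal sketch of that setup, using the names and addresses from this log:

ip netns add cvl_0_0_ns_spdk                                 # private namespace for one side of the link
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                          # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # root namespace -> namespaced port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespaced port -> root namespace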
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:54.149 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:54.408 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:54.408 10:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:24:56.941 Waiting for block devices as requested 00:24:56.941 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:24:56.941 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:56.941 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:56.941 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:56.941 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.198 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:57.198 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.198 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:57.198 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.461 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:57.461 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.461 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.461 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.722 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:57.722 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.722 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:57.722 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:24:57.981 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:58.241 No valid GPT data, bailing 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 
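configure_kernel_target, whose configfs paths are resolved above, builds the target entirely out of the kernel nvmet configfs tree; the trace that follows performs the individual mkdir/echo/ln steps. A condensed sketch of what it sets up (the NQN, listen address and /dev/nvme1n1 namespace are the values this run ends up using; the attribute file names are the standard nvmet ones, not a quote of common.sh):

modprobe nvmet                                    # kernel target core
modprobe nvmet-tcp                                # TCP transport (may also be auto-loaded when the tcp port is created)
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1 > "$subsys/attr_allow_any_host"            # accept any host NQN
# (common.sh also writes a "SPDK-<nqn>" model string into the subsystem; omitted here)
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"               # the target_ip picked above
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"               # expose the subsystem on the port

The nvme discover output further down confirms that both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are then reachable at 10.0.0.1:4420.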
00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:24:58.241 No valid GPT data, bailing 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:58.241 10:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:58.241 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:24:58.241 00:24:58.241 Discovery Log Number of Records 2, Generation counter 2 00:24:58.241 =====Discovery Log Entry 0====== 00:24:58.241 trtype: tcp 00:24:58.241 adrfam: ipv4 00:24:58.241 subtype: current discovery subsystem 00:24:58.241 treq: not specified, sq flow control disable supported 00:24:58.241 portid: 1 00:24:58.241 trsvcid: 4420 00:24:58.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 
00:24:58.241 traddr: 10.0.0.1 00:24:58.241 eflags: none 00:24:58.241 sectype: none 00:24:58.241 =====Discovery Log Entry 1====== 00:24:58.241 trtype: tcp 00:24:58.241 adrfam: ipv4 00:24:58.241 subtype: nvme subsystem 00:24:58.241 treq: not specified, sq flow control disable supported 00:24:58.241 portid: 1 00:24:58.241 trsvcid: 4420 00:24:58.241 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:58.241 traddr: 10.0.0.1 00:24:58.241 eflags: none 00:24:58.241 sectype: none 00:24:58.241 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:58.241 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:58.241 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.241 ===================================================== 00:24:58.241 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:58.241 ===================================================== 00:24:58.241 Controller Capabilities/Features 00:24:58.241 ================================ 00:24:58.241 Vendor ID: 0000 00:24:58.241 Subsystem Vendor ID: 0000 00:24:58.241 Serial Number: eccc40e9c530379eb24d 00:24:58.241 Model Number: Linux 00:24:58.241 Firmware Version: 6.7.0-68 00:24:58.241 Recommended Arb Burst: 0 00:24:58.241 IEEE OUI Identifier: 00 00 00 00:24:58.241 Multi-path I/O 00:24:58.241 May have multiple subsystem ports: No 00:24:58.241 May have multiple controllers: No 00:24:58.241 Associated with SR-IOV VF: No 00:24:58.241 Max Data Transfer Size: Unlimited 00:24:58.241 Max Number of Namespaces: 0 00:24:58.241 Max Number of I/O Queues: 1024 00:24:58.241 NVMe Specification Version (VS): 1.3 00:24:58.241 NVMe Specification Version (Identify): 1.3 00:24:58.241 Maximum Queue Entries: 1024 00:24:58.241 Contiguous Queues Required: No 00:24:58.241 Arbitration Mechanisms Supported 00:24:58.241 Weighted Round Robin: Not Supported 00:24:58.241 Vendor Specific: Not Supported 00:24:58.241 Reset Timeout: 7500 ms 00:24:58.241 Doorbell Stride: 4 bytes 00:24:58.241 NVM Subsystem Reset: Not Supported 00:24:58.241 Command Sets Supported 00:24:58.241 NVM Command Set: Supported 00:24:58.241 Boot Partition: Not Supported 00:24:58.241 Memory Page Size Minimum: 4096 bytes 00:24:58.241 Memory Page Size Maximum: 4096 bytes 00:24:58.241 Persistent Memory Region: Not Supported 00:24:58.241 Optional Asynchronous Events Supported 00:24:58.241 Namespace Attribute Notices: Not Supported 00:24:58.241 Firmware Activation Notices: Not Supported 00:24:58.241 ANA Change Notices: Not Supported 00:24:58.241 PLE Aggregate Log Change Notices: Not Supported 00:24:58.241 LBA Status Info Alert Notices: Not Supported 00:24:58.241 EGE Aggregate Log Change Notices: Not Supported 00:24:58.241 Normal NVM Subsystem Shutdown event: Not Supported 00:24:58.241 Zone Descriptor Change Notices: Not Supported 00:24:58.241 Discovery Log Change Notices: Supported 00:24:58.241 Controller Attributes 00:24:58.241 128-bit Host Identifier: Not Supported 00:24:58.242 Non-Operational Permissive Mode: Not Supported 00:24:58.242 NVM Sets: Not Supported 00:24:58.242 Read Recovery Levels: Not Supported 00:24:58.242 Endurance Groups: Not Supported 00:24:58.242 Predictable Latency Mode: Not Supported 00:24:58.242 Traffic Based Keep ALive: Not Supported 00:24:58.242 Namespace Granularity: Not Supported 00:24:58.242 SQ Associations: Not Supported 00:24:58.242 UUID List: Not Supported 00:24:58.242 Multi-Domain 
Subsystem: Not Supported 00:24:58.242 Fixed Capacity Management: Not Supported 00:24:58.242 Variable Capacity Management: Not Supported 00:24:58.242 Delete Endurance Group: Not Supported 00:24:58.242 Delete NVM Set: Not Supported 00:24:58.242 Extended LBA Formats Supported: Not Supported 00:24:58.242 Flexible Data Placement Supported: Not Supported 00:24:58.242 00:24:58.242 Controller Memory Buffer Support 00:24:58.242 ================================ 00:24:58.242 Supported: No 00:24:58.242 00:24:58.242 Persistent Memory Region Support 00:24:58.242 ================================ 00:24:58.242 Supported: No 00:24:58.242 00:24:58.242 Admin Command Set Attributes 00:24:58.242 ============================ 00:24:58.242 Security Send/Receive: Not Supported 00:24:58.242 Format NVM: Not Supported 00:24:58.242 Firmware Activate/Download: Not Supported 00:24:58.242 Namespace Management: Not Supported 00:24:58.242 Device Self-Test: Not Supported 00:24:58.242 Directives: Not Supported 00:24:58.242 NVMe-MI: Not Supported 00:24:58.242 Virtualization Management: Not Supported 00:24:58.242 Doorbell Buffer Config: Not Supported 00:24:58.242 Get LBA Status Capability: Not Supported 00:24:58.242 Command & Feature Lockdown Capability: Not Supported 00:24:58.242 Abort Command Limit: 1 00:24:58.242 Async Event Request Limit: 1 00:24:58.242 Number of Firmware Slots: N/A 00:24:58.242 Firmware Slot 1 Read-Only: N/A 00:24:58.242 Firmware Activation Without Reset: N/A 00:24:58.242 Multiple Update Detection Support: N/A 00:24:58.242 Firmware Update Granularity: No Information Provided 00:24:58.242 Per-Namespace SMART Log: No 00:24:58.242 Asymmetric Namespace Access Log Page: Not Supported 00:24:58.242 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:58.242 Command Effects Log Page: Not Supported 00:24:58.242 Get Log Page Extended Data: Supported 00:24:58.242 Telemetry Log Pages: Not Supported 00:24:58.242 Persistent Event Log Pages: Not Supported 00:24:58.242 Supported Log Pages Log Page: May Support 00:24:58.242 Commands Supported & Effects Log Page: Not Supported 00:24:58.242 Feature Identifiers & Effects Log Page:May Support 00:24:58.242 NVMe-MI Commands & Effects Log Page: May Support 00:24:58.242 Data Area 4 for Telemetry Log: Not Supported 00:24:58.242 Error Log Page Entries Supported: 1 00:24:58.242 Keep Alive: Not Supported 00:24:58.242 00:24:58.242 NVM Command Set Attributes 00:24:58.242 ========================== 00:24:58.242 Submission Queue Entry Size 00:24:58.242 Max: 1 00:24:58.242 Min: 1 00:24:58.242 Completion Queue Entry Size 00:24:58.242 Max: 1 00:24:58.242 Min: 1 00:24:58.242 Number of Namespaces: 0 00:24:58.242 Compare Command: Not Supported 00:24:58.242 Write Uncorrectable Command: Not Supported 00:24:58.242 Dataset Management Command: Not Supported 00:24:58.242 Write Zeroes Command: Not Supported 00:24:58.242 Set Features Save Field: Not Supported 00:24:58.242 Reservations: Not Supported 00:24:58.242 Timestamp: Not Supported 00:24:58.242 Copy: Not Supported 00:24:58.242 Volatile Write Cache: Not Present 00:24:58.242 Atomic Write Unit (Normal): 1 00:24:58.242 Atomic Write Unit (PFail): 1 00:24:58.242 Atomic Compare & Write Unit: 1 00:24:58.242 Fused Compare & Write: Not Supported 00:24:58.242 Scatter-Gather List 00:24:58.242 SGL Command Set: Supported 00:24:58.242 SGL Keyed: Not Supported 00:24:58.242 SGL Bit Bucket Descriptor: Not Supported 00:24:58.242 SGL Metadata Pointer: Not Supported 00:24:58.242 Oversized SGL: Not Supported 00:24:58.242 SGL Metadata Address: Not Supported 
00:24:58.242 SGL Offset: Supported 00:24:58.242 Transport SGL Data Block: Not Supported 00:24:58.242 Replay Protected Memory Block: Not Supported 00:24:58.242 00:24:58.242 Firmware Slot Information 00:24:58.242 ========================= 00:24:58.242 Active slot: 0 00:24:58.242 00:24:58.242 00:24:58.242 Error Log 00:24:58.242 ========= 00:24:58.242 00:24:58.242 Active Namespaces 00:24:58.242 ================= 00:24:58.242 Discovery Log Page 00:24:58.242 ================== 00:24:58.242 Generation Counter: 2 00:24:58.242 Number of Records: 2 00:24:58.242 Record Format: 0 00:24:58.242 00:24:58.242 Discovery Log Entry 0 00:24:58.242 ---------------------- 00:24:58.242 Transport Type: 3 (TCP) 00:24:58.242 Address Family: 1 (IPv4) 00:24:58.242 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:58.242 Entry Flags: 00:24:58.242 Duplicate Returned Information: 0 00:24:58.242 Explicit Persistent Connection Support for Discovery: 0 00:24:58.242 Transport Requirements: 00:24:58.242 Secure Channel: Not Specified 00:24:58.242 Port ID: 1 (0x0001) 00:24:58.242 Controller ID: 65535 (0xffff) 00:24:58.242 Admin Max SQ Size: 32 00:24:58.242 Transport Service Identifier: 4420 00:24:58.242 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:58.242 Transport Address: 10.0.0.1 00:24:58.242 Discovery Log Entry 1 00:24:58.242 ---------------------- 00:24:58.242 Transport Type: 3 (TCP) 00:24:58.242 Address Family: 1 (IPv4) 00:24:58.242 Subsystem Type: 2 (NVM Subsystem) 00:24:58.242 Entry Flags: 00:24:58.242 Duplicate Returned Information: 0 00:24:58.242 Explicit Persistent Connection Support for Discovery: 0 00:24:58.242 Transport Requirements: 00:24:58.242 Secure Channel: Not Specified 00:24:58.242 Port ID: 1 (0x0001) 00:24:58.242 Controller ID: 65535 (0xffff) 00:24:58.242 Admin Max SQ Size: 32 00:24:58.242 Transport Service Identifier: 4420 00:24:58.242 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:58.242 Transport Address: 10.0.0.1 00:24:58.503 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:58.503 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.503 get_feature(0x01) failed 00:24:58.503 get_feature(0x02) failed 00:24:58.503 get_feature(0x04) failed 00:24:58.503 ===================================================== 00:24:58.503 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:58.503 ===================================================== 00:24:58.503 Controller Capabilities/Features 00:24:58.503 ================================ 00:24:58.503 Vendor ID: 0000 00:24:58.503 Subsystem Vendor ID: 0000 00:24:58.503 Serial Number: b34d21f9bb562c5aa757 00:24:58.503 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:58.503 Firmware Version: 6.7.0-68 00:24:58.503 Recommended Arb Burst: 6 00:24:58.503 IEEE OUI Identifier: 00 00 00 00:24:58.503 Multi-path I/O 00:24:58.503 May have multiple subsystem ports: Yes 00:24:58.503 May have multiple controllers: Yes 00:24:58.504 Associated with SR-IOV VF: No 00:24:58.504 Max Data Transfer Size: Unlimited 00:24:58.504 Max Number of Namespaces: 1024 00:24:58.504 Max Number of I/O Queues: 128 00:24:58.504 NVMe Specification Version (VS): 1.3 00:24:58.504 NVMe Specification Version (Identify): 1.3 00:24:58.504 Maximum Queue Entries: 1024 00:24:58.504 Contiguous Queues Required: No 00:24:58.504 
Arbitration Mechanisms Supported 00:24:58.504 Weighted Round Robin: Not Supported 00:24:58.504 Vendor Specific: Not Supported 00:24:58.504 Reset Timeout: 7500 ms 00:24:58.504 Doorbell Stride: 4 bytes 00:24:58.504 NVM Subsystem Reset: Not Supported 00:24:58.504 Command Sets Supported 00:24:58.504 NVM Command Set: Supported 00:24:58.504 Boot Partition: Not Supported 00:24:58.504 Memory Page Size Minimum: 4096 bytes 00:24:58.504 Memory Page Size Maximum: 4096 bytes 00:24:58.504 Persistent Memory Region: Not Supported 00:24:58.504 Optional Asynchronous Events Supported 00:24:58.504 Namespace Attribute Notices: Supported 00:24:58.504 Firmware Activation Notices: Not Supported 00:24:58.504 ANA Change Notices: Supported 00:24:58.504 PLE Aggregate Log Change Notices: Not Supported 00:24:58.504 LBA Status Info Alert Notices: Not Supported 00:24:58.504 EGE Aggregate Log Change Notices: Not Supported 00:24:58.504 Normal NVM Subsystem Shutdown event: Not Supported 00:24:58.504 Zone Descriptor Change Notices: Not Supported 00:24:58.504 Discovery Log Change Notices: Not Supported 00:24:58.504 Controller Attributes 00:24:58.504 128-bit Host Identifier: Supported 00:24:58.504 Non-Operational Permissive Mode: Not Supported 00:24:58.504 NVM Sets: Not Supported 00:24:58.504 Read Recovery Levels: Not Supported 00:24:58.504 Endurance Groups: Not Supported 00:24:58.504 Predictable Latency Mode: Not Supported 00:24:58.504 Traffic Based Keep ALive: Supported 00:24:58.504 Namespace Granularity: Not Supported 00:24:58.504 SQ Associations: Not Supported 00:24:58.504 UUID List: Not Supported 00:24:58.504 Multi-Domain Subsystem: Not Supported 00:24:58.504 Fixed Capacity Management: Not Supported 00:24:58.504 Variable Capacity Management: Not Supported 00:24:58.504 Delete Endurance Group: Not Supported 00:24:58.504 Delete NVM Set: Not Supported 00:24:58.504 Extended LBA Formats Supported: Not Supported 00:24:58.504 Flexible Data Placement Supported: Not Supported 00:24:58.504 00:24:58.504 Controller Memory Buffer Support 00:24:58.504 ================================ 00:24:58.504 Supported: No 00:24:58.504 00:24:58.504 Persistent Memory Region Support 00:24:58.504 ================================ 00:24:58.504 Supported: No 00:24:58.504 00:24:58.504 Admin Command Set Attributes 00:24:58.504 ============================ 00:24:58.504 Security Send/Receive: Not Supported 00:24:58.504 Format NVM: Not Supported 00:24:58.504 Firmware Activate/Download: Not Supported 00:24:58.504 Namespace Management: Not Supported 00:24:58.504 Device Self-Test: Not Supported 00:24:58.504 Directives: Not Supported 00:24:58.504 NVMe-MI: Not Supported 00:24:58.504 Virtualization Management: Not Supported 00:24:58.504 Doorbell Buffer Config: Not Supported 00:24:58.504 Get LBA Status Capability: Not Supported 00:24:58.504 Command & Feature Lockdown Capability: Not Supported 00:24:58.504 Abort Command Limit: 4 00:24:58.504 Async Event Request Limit: 4 00:24:58.504 Number of Firmware Slots: N/A 00:24:58.504 Firmware Slot 1 Read-Only: N/A 00:24:58.504 Firmware Activation Without Reset: N/A 00:24:58.504 Multiple Update Detection Support: N/A 00:24:58.504 Firmware Update Granularity: No Information Provided 00:24:58.504 Per-Namespace SMART Log: Yes 00:24:58.504 Asymmetric Namespace Access Log Page: Supported 00:24:58.504 ANA Transition Time : 10 sec 00:24:58.504 00:24:58.504 Asymmetric Namespace Access Capabilities 00:24:58.504 ANA Optimized State : Supported 00:24:58.504 ANA Non-Optimized State : Supported 00:24:58.504 ANA Inaccessible State : 
Supported 00:24:58.504 ANA Persistent Loss State : Supported 00:24:58.504 ANA Change State : Supported 00:24:58.504 ANAGRPID is not changed : No 00:24:58.504 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:58.504 00:24:58.504 ANA Group Identifier Maximum : 128 00:24:58.504 Number of ANA Group Identifiers : 128 00:24:58.504 Max Number of Allowed Namespaces : 1024 00:24:58.504 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:58.504 Command Effects Log Page: Supported 00:24:58.504 Get Log Page Extended Data: Supported 00:24:58.504 Telemetry Log Pages: Not Supported 00:24:58.504 Persistent Event Log Pages: Not Supported 00:24:58.504 Supported Log Pages Log Page: May Support 00:24:58.504 Commands Supported & Effects Log Page: Not Supported 00:24:58.504 Feature Identifiers & Effects Log Page:May Support 00:24:58.504 NVMe-MI Commands & Effects Log Page: May Support 00:24:58.504 Data Area 4 for Telemetry Log: Not Supported 00:24:58.504 Error Log Page Entries Supported: 128 00:24:58.504 Keep Alive: Supported 00:24:58.504 Keep Alive Granularity: 1000 ms 00:24:58.504 00:24:58.504 NVM Command Set Attributes 00:24:58.504 ========================== 00:24:58.504 Submission Queue Entry Size 00:24:58.504 Max: 64 00:24:58.504 Min: 64 00:24:58.504 Completion Queue Entry Size 00:24:58.504 Max: 16 00:24:58.504 Min: 16 00:24:58.504 Number of Namespaces: 1024 00:24:58.504 Compare Command: Not Supported 00:24:58.504 Write Uncorrectable Command: Not Supported 00:24:58.504 Dataset Management Command: Supported 00:24:58.504 Write Zeroes Command: Supported 00:24:58.504 Set Features Save Field: Not Supported 00:24:58.504 Reservations: Not Supported 00:24:58.504 Timestamp: Not Supported 00:24:58.504 Copy: Not Supported 00:24:58.504 Volatile Write Cache: Present 00:24:58.504 Atomic Write Unit (Normal): 1 00:24:58.504 Atomic Write Unit (PFail): 1 00:24:58.504 Atomic Compare & Write Unit: 1 00:24:58.504 Fused Compare & Write: Not Supported 00:24:58.504 Scatter-Gather List 00:24:58.504 SGL Command Set: Supported 00:24:58.504 SGL Keyed: Not Supported 00:24:58.504 SGL Bit Bucket Descriptor: Not Supported 00:24:58.504 SGL Metadata Pointer: Not Supported 00:24:58.504 Oversized SGL: Not Supported 00:24:58.504 SGL Metadata Address: Not Supported 00:24:58.504 SGL Offset: Supported 00:24:58.504 Transport SGL Data Block: Not Supported 00:24:58.504 Replay Protected Memory Block: Not Supported 00:24:58.504 00:24:58.504 Firmware Slot Information 00:24:58.504 ========================= 00:24:58.504 Active slot: 0 00:24:58.504 00:24:58.504 Asymmetric Namespace Access 00:24:58.504 =========================== 00:24:58.504 Change Count : 0 00:24:58.504 Number of ANA Group Descriptors : 1 00:24:58.504 ANA Group Descriptor : 0 00:24:58.504 ANA Group ID : 1 00:24:58.504 Number of NSID Values : 1 00:24:58.504 Change Count : 0 00:24:58.504 ANA State : 1 00:24:58.504 Namespace Identifier : 1 00:24:58.504 00:24:58.504 Commands Supported and Effects 00:24:58.504 ============================== 00:24:58.504 Admin Commands 00:24:58.504 -------------- 00:24:58.504 Get Log Page (02h): Supported 00:24:58.504 Identify (06h): Supported 00:24:58.504 Abort (08h): Supported 00:24:58.504 Set Features (09h): Supported 00:24:58.504 Get Features (0Ah): Supported 00:24:58.504 Asynchronous Event Request (0Ch): Supported 00:24:58.504 Keep Alive (18h): Supported 00:24:58.504 I/O Commands 00:24:58.504 ------------ 00:24:58.504 Flush (00h): Supported 00:24:58.504 Write (01h): Supported LBA-Change 00:24:58.504 Read (02h): Supported 00:24:58.504 Write Zeroes 
(08h): Supported LBA-Change 00:24:58.504 Dataset Management (09h): Supported 00:24:58.504 00:24:58.504 Error Log 00:24:58.504 ========= 00:24:58.504 Entry: 0 00:24:58.504 Error Count: 0x3 00:24:58.504 Submission Queue Id: 0x0 00:24:58.504 Command Id: 0x5 00:24:58.504 Phase Bit: 0 00:24:58.504 Status Code: 0x2 00:24:58.504 Status Code Type: 0x0 00:24:58.504 Do Not Retry: 1 00:24:58.504 Error Location: 0x28 00:24:58.504 LBA: 0x0 00:24:58.504 Namespace: 0x0 00:24:58.504 Vendor Log Page: 0x0 00:24:58.504 ----------- 00:24:58.504 Entry: 1 00:24:58.504 Error Count: 0x2 00:24:58.504 Submission Queue Id: 0x0 00:24:58.504 Command Id: 0x5 00:24:58.504 Phase Bit: 0 00:24:58.504 Status Code: 0x2 00:24:58.504 Status Code Type: 0x0 00:24:58.504 Do Not Retry: 1 00:24:58.504 Error Location: 0x28 00:24:58.504 LBA: 0x0 00:24:58.504 Namespace: 0x0 00:24:58.504 Vendor Log Page: 0x0 00:24:58.504 ----------- 00:24:58.504 Entry: 2 00:24:58.504 Error Count: 0x1 00:24:58.504 Submission Queue Id: 0x0 00:24:58.504 Command Id: 0x4 00:24:58.504 Phase Bit: 0 00:24:58.504 Status Code: 0x2 00:24:58.504 Status Code Type: 0x0 00:24:58.504 Do Not Retry: 1 00:24:58.504 Error Location: 0x28 00:24:58.504 LBA: 0x0 00:24:58.504 Namespace: 0x0 00:24:58.504 Vendor Log Page: 0x0 00:24:58.504 00:24:58.504 Number of Queues 00:24:58.504 ================ 00:24:58.504 Number of I/O Submission Queues: 128 00:24:58.504 Number of I/O Completion Queues: 128 00:24:58.504 00:24:58.504 ZNS Specific Controller Data 00:24:58.505 ============================ 00:24:58.505 Zone Append Size Limit: 0 00:24:58.505 00:24:58.505 00:24:58.505 Active Namespaces 00:24:58.505 ================= 00:24:58.505 get_feature(0x05) failed 00:24:58.505 Namespace ID:1 00:24:58.505 Command Set Identifier: NVM (00h) 00:24:58.505 Deallocate: Supported 00:24:58.505 Deallocated/Unwritten Error: Not Supported 00:24:58.505 Deallocated Read Value: Unknown 00:24:58.505 Deallocate in Write Zeroes: Not Supported 00:24:58.505 Deallocated Guard Field: 0xFFFF 00:24:58.505 Flush: Supported 00:24:58.505 Reservation: Not Supported 00:24:58.505 Namespace Sharing Capabilities: Multiple Controllers 00:24:58.505 Size (in LBAs): 1875385008 (894GiB) 00:24:58.505 Capacity (in LBAs): 1875385008 (894GiB) 00:24:58.505 Utilization (in LBAs): 1875385008 (894GiB) 00:24:58.505 UUID: 3aa56b1e-884c-4829-9ecc-79a3aac32212 00:24:58.505 Thin Provisioning: Not Supported 00:24:58.505 Per-NS Atomic Units: Yes 00:24:58.505 Atomic Write Unit (Normal): 8 00:24:58.505 Atomic Write Unit (PFail): 8 00:24:58.505 Preferred Write Granularity: 8 00:24:58.505 Atomic Compare & Write Unit: 8 00:24:58.505 Atomic Boundary Size (Normal): 0 00:24:58.505 Atomic Boundary Size (PFail): 0 00:24:58.505 Atomic Boundary Offset: 0 00:24:58.505 NGUID/EUI64 Never Reused: No 00:24:58.505 ANA group ID: 1 00:24:58.505 Namespace Write Protected: No 00:24:58.505 Number of LBA Formats: 1 00:24:58.505 Current LBA Format: LBA Format #00 00:24:58.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:58.505 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:58.505 10:44:14 
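Teardown mirrors the setup: the clean_kernel_target steps traced below disable the namespace, unlink the subsystem from the port, remove the configfs directories and unload the modules. A condensed sketch with the same paths as above (ours, not a quote of common.sh):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                  # take the namespace offline first
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # unlink the subsystem from the port
rmdir "$subsys/namespaces/1" "$port" "$subsys"          # configfs objects are removed with rmdir
modprobe -r nvmet_tcp nvmet                             # unload target modules once unused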
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.505 rmmod nvme_tcp 00:24:58.505 rmmod nvme_fabrics 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.505 10:44:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.449 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.449 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:00.449 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:00.449 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:00.707 10:44:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:03.236 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.236 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.236 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.236 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.236 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.236 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 
00:25:03.236 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.236 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.496 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:25:03.496 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:03.496 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:25:04.066 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:25:04.066 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:25:04.324 00:25:04.324 real 0m15.633s 00:25:04.324 user 0m3.458s 00:25:04.324 sys 0m7.683s 00:25:04.324 10:44:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:04.324 10:44:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.324 ************************************ 00:25:04.324 END TEST nvmf_identify_kernel_target 00:25:04.324 ************************************ 00:25:04.324 10:44:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:04.324 10:44:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:04.324 10:44:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:04.324 10:44:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.324 ************************************ 00:25:04.324 START TEST nvmf_auth_host 00:25:04.324 ************************************ 00:25:04.324 10:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:04.582 * Looking for test storage... 00:25:04.582 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:04.582 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.583 10:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 
-- # local -ga e810 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:11.151 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:11.151 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:11.151 Found net devices under 0000:27:00.0: cvl_0_0 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:11.151 Found net devices under 0000:27:00.1: cvl_0_1 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.151 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.152 10:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:25:11.152 00:25:11.152 --- 10.0.0.2 ping statistics --- 00:25:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.152 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:25:11.152 00:25:11.152 --- 10.0.0.1 ping statistics --- 00:25:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.152 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2819909 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2819909 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2819909 ']' 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
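
What nvmf_tcp_init does above boils down to a handful of iproute2 commands: the two E810 ports found earlier (cvl_0_0 under 0000:27:00.0, cvl_0_1 under 0000:27:00.1) are split across a network namespace so the target answers on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays at 10.0.0.1 in the root namespace. Condensed from the trace:

  # Split the two test NIC ports across a namespace (as traced above).
  tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  ip -4 addr flush "$tgt_if" && ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"                   # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"               # initiator side, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # sanity check: target reachable
  ip netns exec "$ns" ping -c 1 10.0.0.1              # and the reverse direction
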
00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.152 10:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc453f6a0270882afaffd5a0b33ed129 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6GZ 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc453f6a0270882afaffd5a0b33ed129 0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc453f6a0270882afaffd5a0b33ed129 0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc453f6a0270882afaffd5a0b33ed129 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6GZ 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6GZ 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6GZ 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 
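
gen_dhchap_key, traced here for each of the five host keys and their controller counterparts, draws a random hex secret and wraps it in the DHHC-1 secret representation. A sketch of what the xxd/mktemp/python steps amount to; the python body shown is an assumption (the trace hides it), based on the NVMe DH-HMAC-CHAP secret format of base64(secret || CRC-32):

  # Sketch of gen_dhchap_key as traced above; not SPDK's verbatim helper.
  gen_dhchap_key() {                                  # e.g. gen_dhchap_key sha256 32
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local name=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # $len hex characters of secret
      file=$(mktemp -t "spdk.key-${name}.XXX")
      # Assumed formatting step: append CRC-32, base64-encode, prefix with DHHC-1:<digest>.
      python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' "$key" "${digests[$name]}" > "$file"
      chmod 0600 "$file"
      echo "$file"                                    # e.g. /tmp/spdk.key-null.6GZ
  }
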
00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eaed031e4e1ebe44fd0d3c48769341a2f61d3632b645cff442963499e59ca607 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.u3k 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eaed031e4e1ebe44fd0d3c48769341a2f61d3632b645cff442963499e59ca607 3 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eaed031e4e1ebe44fd0d3c48769341a2f61d3632b645cff442963499e59ca607 3 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eaed031e4e1ebe44fd0d3c48769341a2f61d3632b645cff442963499e59ca607 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.u3k 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.u3k 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.u3k 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=67aecd5cbc639d196dd3e83e111a0cde12e77c647df5a99d 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hqh 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 67aecd5cbc639d196dd3e83e111a0cde12e77c647df5a99d 0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 67aecd5cbc639d196dd3e83e111a0cde12e77c647df5a99d 0 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=67aecd5cbc639d196dd3e83e111a0cde12e77c647df5a99d 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:11.413 10:44:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hqh 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hqh 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Hqh 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10583b8e3cd0b4b1601e917b7d377a5c83045ea4be6eb9c7 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CuU 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10583b8e3cd0b4b1601e917b7d377a5c83045ea4be6eb9c7 2 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10583b8e3cd0b4b1601e917b7d377a5c83045ea4be6eb9c7 2 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10583b8e3cd0b4b1601e917b7d377a5c83045ea4be6eb9c7 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CuU 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CuU 00:25:11.413 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CuU 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb7b58b5aa95d677ca390b4e6e5521d9 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HcV 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # 
format_dhchap_key eb7b58b5aa95d677ca390b4e6e5521d9 1 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb7b58b5aa95d677ca390b4e6e5521d9 1 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb7b58b5aa95d677ca390b4e6e5521d9 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:11.414 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HcV 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HcV 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HcV 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=815556115f97a1764a0a8a044b32910d 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bAJ 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 815556115f97a1764a0a8a044b32910d 1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 815556115f97a1764a0a8a044b32910d 1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=815556115f97a1764a0a8a044b32910d 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bAJ 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bAJ 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bAJ 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=48 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f4bb53b0ddb9ea1960702d0a7fdc3f570593255d44f14ace 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.taO 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f4bb53b0ddb9ea1960702d0a7fdc3f570593255d44f14ace 2 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f4bb53b0ddb9ea1960702d0a7fdc3f570593255d44f14ace 2 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f4bb53b0ddb9ea1960702d0a7fdc3f570593255d44f14ace 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.taO 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.taO 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.taO 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6a47cf034155a5b567fadcadad46a112 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JAR 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6a47cf034155a5b567fadcadad46a112 0 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6a47cf034155a5b567fadcadad46a112 0 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6a47cf034155a5b567fadcadad46a112 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JAR 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JAR 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # 
ckeys[3]=/tmp/spdk.key-null.JAR 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8fbe1d51181dbbff5592b1955f27f51524e76d918170dfeb8cebbc9691457b7d 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.unc 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8fbe1d51181dbbff5592b1955f27f51524e76d918170dfeb8cebbc9691457b7d 3 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8fbe1d51181dbbff5592b1955f27f51524e76d918170dfeb8cebbc9691457b7d 3 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8fbe1d51181dbbff5592b1955f27f51524e76d918170dfeb8cebbc9691457b7d 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.unc 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.unc 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.unc 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2819909 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2819909 ']' 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
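
waitforlisten appears twice in this stretch (once inside nvmfappstart, once directly from auth.sh before any RPCs are issued); its loop body is hidden behind xtrace_disable, but the effect is a bounded poll of the target's RPC socket. A simplified stand-in, not SPDK's exact helper:

  # Launch the target inside the namespace and wait for its RPC socket (sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  waitforlisten() {                        # simplified stand-in for the real helper
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i > 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1                    # target died
          ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1                                                      # timed out
  }
  waitforlisten "$nvmfpid"
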
00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:11.675 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6GZ 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.u3k ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u3k 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hqh 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CuU ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CuU 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HcV 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bAJ ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bAJ 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
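
Each secret is then registered with the SPDK keyring before any connection attempt; rpc_cmd is effectively a wrapper around scripts/rpc.py, so the calls traced above come down to the following (key3, ckey3 and key4 follow the same pattern just below):

  # Register host secrets (keyN) and controller secrets (ckeyN) with the SPDK keyring.
  rpc=./scripts/rpc.py
  $rpc keyring_file_add_key key0  /tmp/spdk.key-null.6GZ
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u3k
  $rpc keyring_file_add_key key1  /tmp/spdk.key-null.Hqh
  $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CuU
  $rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.HcV
  $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bAJ
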
00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.taO 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JAR ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JAR 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.unc 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
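
configure_kernel_target begins here; the rest of the trace builds the kernel soft-target over configfs, pins the DH-HMAC-CHAP parameters on the allowed-host entry, and authenticates from the SPDK side for every digest/dhgroup/key combination. A condensed sketch of that sequence; the configfs attribute names are assumptions drawn from the kernel nvmet ABI (the xtrace output does not show redirection targets), and the key strings are truncated:

  # 1. Kernel NVMe/TCP target over configfs, backed by the spare namespace found below.
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
  modprobe nvmet                                           # kernel target module, as in the trace
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 0            > "$subsys/attr_allow_any_host"        # only the allowed host may connect
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

  # 2. Pin the authentication parameters on the host entry (nvmet_auth_set_key).
  echo 'hmac(sha256)'          > "$host/dhchap_hash"
  echo ffdhe2048               > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:NjdhZ...==:' > "$host/dhchap_key"        # host secret, truncated
  echo 'DHHC-1:02:MTA1O...==:' > "$host/dhchap_ctrl_key"   # controller secret, truncated

  # 3. Authenticate from the SPDK initiator side (connect_authenticate).
  rpc=./scripts/rpc.py
  $rpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'        # "nvme0" only if auth succeeded
  $rpc bdev_nvme_detach_controller nvme0

  # 4. The test then repeats steps 2-3 for every combination declared earlier:
  #    for digest in sha256 sha384 sha512; for dhgroup in ffdhe2048..ffdhe8192; for keyid in 0..4.
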
00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:11.935 10:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:25:14.469 Waiting for block devices as requested 00:25:14.469 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:25:14.729 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:14.729 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:14.729 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:14.988 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:25:14.988 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:14.988 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:25:14.988 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:15.249 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.249 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:15.249 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.249 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.508 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.508 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:15.508 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.508 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:15.768 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:25:15.768 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:16.339 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:16.598 No valid GPT data, bailing 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:16.599 10:44:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:16.599 No valid GPT data, bailing 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:25:16.599 00:25:16.599 Discovery Log Number of Records 2, Generation counter 2 00:25:16.599 =====Discovery Log Entry 0====== 00:25:16.599 trtype: tcp 00:25:16.599 adrfam: ipv4 00:25:16.599 subtype: current discovery subsystem 00:25:16.599 treq: not specified, sq flow control disable supported 00:25:16.599 portid: 1 00:25:16.599 trsvcid: 4420 00:25:16.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:16.599 traddr: 10.0.0.1 00:25:16.599 eflags: none 00:25:16.599 sectype: none 00:25:16.599 =====Discovery Log Entry 1====== 00:25:16.599 trtype: tcp 00:25:16.599 adrfam: ipv4 00:25:16.599 subtype: nvme subsystem 00:25:16.599 treq: not specified, sq flow control disable supported 00:25:16.599 portid: 1 00:25:16.599 trsvcid: 4420 00:25:16.599 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:16.599 traddr: 10.0.0.1 00:25:16.599 eflags: none 00:25:16.599 sectype: none 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s 
/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.599 10:44:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 nvme0n1 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:16.858 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.859 nvme0n1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.859 10:44:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.859 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.118 nvme0n1 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha256 ffdhe2048 2 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.118 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.119 10:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.119 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.119 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.119 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.380 nvme0n1 00:25:17.381 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.381 10:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.381 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.381 10:44:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 nvme0n1 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.381 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.642 nvme0n1 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.642 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.643 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.902 nvme0n1 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.902 
10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.902 10:44:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.902 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.903 nvme0n1 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.903 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
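For context, the nvmet_auth_set_key calls traced above stage the DH-HMAC-CHAP material on the peer (kernel nvmet) target before each connect attempt: the digest, the DH group, the host secret and, when present, the controller secret. The trace only captures the echo commands, not where their output is redirected, so the configfs paths and variable names in the sketch below are an assumption about the target-side plumbing rather than something shown in this log:

# Hedged sketch of the target-side step; paths and variable names are assumed.
# digest/dhgroup/key/ckey mirror the values echoed above, e.g. digest=sha256,
# dhgroup=ffdhe3072, key=DHHC-1:01:..., ckey=DHHC-1:01:...
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo "hmac($digest)" > "$host_cfg/dhchap_hash"      # the echo 'hmac(sha256)' above
echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"   # the echo ffdhe3072 above
echo "$key"          > "$host_cfg/dhchap_key"       # host secret (DHHC-1:... string)
[[ -n "$ckey" ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # optional bidirectional secret

The [[ -z ... ]] check at host/auth.sh@51 in the trace plays the same role as the last line: the controller secret is only programmed when a ckey exists for that key index.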
00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.161 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.162 nvme0n1 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.162 10:44:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.162 10:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
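The connect_authenticate half of each iteration, interleaved through this trace, reduces to a short RPC sequence on the SPDK initiator; rpc_cmd is the test suite's wrapper around the SPDK JSON-RPC client. The sketch below restates that sequence for the (sha256, ffdhe3072, keyid=3) tuple being set up above, and assumes the names key0..key4 / ckey0..ckey3 refer to keys registered earlier in the test (not part of this excerpt):

# Hedged restatement of connect_authenticate for one (digest, dhgroup, keyid) tuple.
digest=sha256 dhgroup=ffdhe3072 keyid=3
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"   # ctrlr key dropped when empty
# Authentication succeeded only if the controller actually materializes; then tear it down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The get_main_ns_ip fragments around this point only resolve which address to dial (10.0.0.1 for the tcp transport); the authentication itself is carried entirely by the attach_controller call.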
00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.162 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.472 nvme0n1 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.472 
10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.472 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.756 nvme0n1 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha256 ffdhe4096 0 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.756 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.757 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.016 nvme0n1 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.016 10:44:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.016 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.017 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.277 nvme0n1 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
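Stepping back, the repetition in this part of the log is the outer sweep in auth.sh (the host/auth.sh@100, @101, @102 and @103 markers above): for the current digest, every DH group is exercised with every key index. A condensed sketch of that driver loop follows, under the assumption that keys[] and ckeys[] hold the DHHC-1 strings echoed throughout this trace (ckeys[4] being empty, which is why keyid 4 attaches without a controller key):

# Hedged sketch of the sweep that generates this trace: digests x dhgroups x key indices.
for digest in "${digests[@]}"; do           # sha256 in this part of the log
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do      # 0..4
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done
done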
00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.277 10:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.538 nvme0n1 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.538 nvme0n1 00:25:19.538 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.799 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.800 nvme0n1 00:25:19.800 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.058 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.059 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.059 10:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.059 10:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.059 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.059 10:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.316 nvme0n1 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.316 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 
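From this point the log finishes the remaining ffdhe6144 key indices, runs the same set for ffdhe8192, and then advances the outer digest loop to sha384, restarting the dhgroup list at ffdhe2048 (auth.sh@100-103). The nesting, reconstructed from the loop headers in this trace (only array values that actually appear in the log are listed; the script's full arrays may hold more entries):

  # Loop structure implied by host/auth.sh@100-103 in this trace (sketch).
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do      # key indices 0..4, keys/ckeys filled in by the harness
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done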
00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.317 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.884 nvme0n1 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.884 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.143 nvme0n1 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.143 10:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.402 nvme0n1 00:25:21.402 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.402 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.402 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.402 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.402 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.663 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.922 nvme0n1 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.922 10:44:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.922 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.923 10:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 nvme0n1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.489 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.057 nvme0n1 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.057 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.317 10:44:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.317 10:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.887 nvme0n1 00:25:23.887 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:23.888 10:44:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.888 10:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.455 nvme0n1 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:25:24.455 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 nvme0n1 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.037 
10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.037 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.298 10:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.298 nvme0n1 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:25.298 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.299 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 nvme0n1 00:25:25.559 10:44:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 nvme0n1 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.559 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 
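The entries above are the target-side half of each iteration: host/auth.sh@42-51 (nvmet_auth_set_key) loads the digest, DH group and DHHC-1 secrets for the key id under test before the initiator reconnects. Bash xtrace does not print redirections, so only the echoed values appear in this trace; a minimal sketch of what the helper is doing, with the configfs destinations treated as assumptions about the kernel nvmet layout rather than something this log shows, is:

# Hedged sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# The echo targets below are assumptions (xtrace hides redirections).
nvmet_auth_set_key() {
	local digest dhgroup keyid key ckey
	digest=$1 dhgroup=$2 keyid=$3
	key=${keys[keyid]} ckey=${ckeys[keyid]}

	# hypothetical configfs path for the allowed-host entry
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. 'hmac(sha384)'
	echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe2048
	echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:..: host secret
	# key id 4 has no controller key in this run, hence the empty ckey above
	[[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}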
00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.819 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.820 nvme0n1 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.820 
10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.820 10:44:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.820 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.079 nvme0n1 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:26.079 10:44:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.079 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.080 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.338 nvme0n1 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.338 10:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.338 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.339 10:44:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.339 nvme0n1 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.339 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.597 10:44:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.597 nvme0n1 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
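On the initiator side, every command of connect_authenticate (host/auth.sh@55-65) is traced verbatim in the surrounding entries: bdev_nvme_set_options restricts the negotiable digest and DH group, bdev_nvme_attach_controller supplies the DH-HMAC-CHAP keys, and the bdev_nvme_get_controllers/detach pair is the pass check. Condensed into one function, with rpc_cmd being the test wrapper around scripts/rpc.py and keyN/ckeyN referring to keyring entries registered earlier in the test, outside this excerpt:

# Condensed reconstruction of connect_authenticate from the traced commands.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# controller key is optional; key id 4 carries none in this run
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# the attach only leaves a controller behind if DH-HMAC-CHAP completed,
	# so a present nvme0 is the pass criterion
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

A failed handshake would leave no controller to list, so the nvme0 comparison is what turns an authentication failure into a test failure in this trace.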
00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:26.597 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.598 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.856 nvme0n1 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.856 10:44:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.856 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.857 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.117 nvme0n1 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:27.117 10:44:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.117 10:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.378 nvme0n1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.378 
10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.378 10:44:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.378 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 nvme0n1 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.638 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 nvme0n1 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.899 
10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.899 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.158 nvme0n1 00:25:28.158 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.159 10:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.417 nvme0n1 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.417 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.676 nvme0n1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.676 10:44:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.676 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 nvme0n1 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.243 10:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.503 nvme0n1 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.503 
10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.503 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
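The repetition in this part of the log comes from a simple sweep: for each DH group (ffdhe4096, ffdhe6144, and ffdhe8192 in this excerpt) the test walks the key IDs in order, reprogramming the target with nvmet_auth_set_key and reconnecting with connect_authenticate each time. A sketch of that driver loop for the sha384 pass, with the helper bodies and key arrays assumed to be the ones traced at host/auth.sh@42-@65:

digest=sha384
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
# keys[i]/ckeys[i] hold the DHHC-1 secrets for key ID i; the real script
# populates them before this loop (values elided here).
declare -a keys ckeys

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Push key/ckey $keyid to the target, then authenticate from the host side.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done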
00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.504 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.071 nvme0n1 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:30.071 
10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.071 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.072 10:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.330 nvme0n1 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # 
local digest dhgroup keyid key ckey 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.330 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.331 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.897 nvme0n1 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.897 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.898 10:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.466 nvme0n1 00:25:31.466 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.466 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.466 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.466 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.466 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.726 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.346 nvme0n1 00:25:32.346 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.346 10:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.346 10:44:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.346 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.346 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.346 10:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.346 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.949 nvme0n1 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:32.949 10:44:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.949 10:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.521 nvme0n1 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.521 
10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.521 10:44:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.521 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.522 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.522 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.522 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.522 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 nvme0n1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:33.783 10:44:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 nvme0n1 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.783 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 nvme0n1 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.043 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.044 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.302 nvme0n1 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.302 10:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.302 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.303 nvme0n1 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.303 10:44:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.303 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.561 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.562 nvme0n1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.562 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.820 nvme0n1 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.820 10:44:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.820 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.079 nvme0n1 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:35.079 10:44:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.079 nvme0n1 00:25:35.079 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.339 10:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.339 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.340 nvme0n1 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.340 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.600 nvme0n1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.600 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.601 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.860 nvme0n1 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.860 10:44:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.860 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.861 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.121 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.122 nvme0n1 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.122 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.383 10:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 nvme0n1 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
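The ip_candidates block that has just started above (and finishes with "echo 10.0.0.1" right below) is the trace of the get_main_ns_ip helper, which decides which address the initiator dials before every bdev_nvme_attach_controller call. A minimal sketch of what it appears to do, reconstructed only from the commands visible in this xtrace; the name of the variable holding the transport ("tcp" in this run) is not shown in the trace and is an assumption here:

    # Sketch inferred from the xtrace above; not the verbatim nvmf/common.sh source.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Map each transport to the name of the variable that holds the address to dial.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # $transport is assumed; the trace only shows it already expanded to "tcp".
        [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
        ip=${ip_candidates[$transport]}   # NVMF_INITIATOR_IP in this run
        [[ -z ${!ip} ]] && return 1       # indirect expansion; 10.0.0.1 here
        echo "${!ip}"
    }

With TCP selected, every attach in this log therefore targets 10.0.0.1, port 4420.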
00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.383 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.644 nvme0n1 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
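Each remaining block in this part of the log is one more pass of the same two-step loop, now moving from ffdhe4096 to ffdhe6144 and later ffdhe8192: program the target side with nvmet_auth_set_key, then prove the host can authenticate with connect_authenticate. A rough sketch of that flow, reconstructed from the xtrace entries rather than from the verbatim host/auth.sh source (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys/ckeys/dhgroups arrays are the names visible in the trace; their definitions are assumed):

    # Host-side check: reconfigure DH-HMAC-CHAP options, attach, verify, detach.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Pass a controller key only when a bidirectional key exists for this ID
        # (keyid 4 has an empty ckey, so the option is simply omitted).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only completes if DH-HMAC-CHAP succeeds, so a controller named
        # nvme0 showing up is the pass condition; tear it down for the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # Outer loop driving the iterations seen in this log (digest fixed at sha512 here).
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"   # target: hmac(sha512), dhgroup, DHHC-1 key
            connect_authenticate sha512 "$dhgroup" "$keyid" # host: options, attach, verify, detach
        done
    done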
00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.644 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.645 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 nvme0n1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.214 10:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.473 nvme0n1 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.473 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.039 nvme0n1 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.039 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.040 10:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.300 nvme0n1 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.300 10:44:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.300 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.559 nvme0n1 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.559 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0NTNmNmEwMjcwODgyYWZhZmZkNWEwYjMzZWQxMjnEGHHF: 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFlZDAzMWU0ZTFlYmU0NGZkMGQzYzQ4NzY5MzQxYTJmNjFkMzYzMmI2NDVjZmY0NDI5NjM0OTllNTljYTYwN8QSz9c=: 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.817 10:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.383 nvme0n1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.383 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 nvme0n1 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI3YjU4YjVhYTk1ZDY3N2NhMzkwYjRlNmU1NTIxZDmc1EFr: 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE1NTU2MTE1Zjk3YTE3NjRhMGE4YTA0NGIzMjkxMGRd+F2j: 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.954 
10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.954 10:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.523 nvme0n1 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYjUzYjBkZGI5ZWExOTYwNzAyZDBhN2ZkYzNmNTcwNTkzMjU1ZDQ0ZjE0YWNlq8Pj+g==: 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE0N2NmMDM0MTU1YTViNTY3ZmFkY2FkYWQ0NmExMTJowbFg: 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.523 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.090 nvme0n1 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGZiZTFkNTExODFkYmJmZjU1OTJiMTk1NWYyN2Y1MTUyNGU3NmQ5MTgxNzBkZmViOGNlYmJjOTY5MTQ1N2I3ZDMhw4M=: 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.090 10:44:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.090 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.091 10:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.658 nvme0n1 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.658 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe2048 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdhZWNkNWNiYzYzOWQxOTZkZDNlODNlMTExYTBjZGUxMmU3N2M2NDdkZjVhOTlkhRXNPw==: 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: ]] 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTA1ODNiOGUzY2QwYjRiMTYwMWU5MTdiN2QzNzdhNWM4MzA0NWVhNGJlNmViOWM3i9SDUA==: 00:25:41.919 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 request: 00:25:41.920 { 00:25:41.920 "name": "nvme0", 00:25:41.920 "trtype": "tcp", 00:25:41.920 "traddr": "10.0.0.1", 00:25:41.920 "hostnqn": 
"nqn.2024-02.io.spdk:host0", 00:25:41.920 "adrfam": "ipv4", 00:25:41.920 "trsvcid": "4420", 00:25:41.920 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.920 "method": "bdev_nvme_attach_controller", 00:25:41.920 "req_id": 1 00:25:41.920 } 00:25:41.920 Got JSON-RPC error response 00:25:41.920 response: 00:25:41.920 { 00:25:41.920 "code": -32602, 00:25:41.920 "message": "Invalid parameters" 00:25:41.920 } 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 request: 00:25:41.920 { 00:25:41.920 "name": "nvme0", 00:25:41.920 "trtype": "tcp", 00:25:41.920 "traddr": "10.0.0.1", 00:25:41.920 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:41.920 "adrfam": "ipv4", 00:25:41.920 "trsvcid": "4420", 00:25:41.920 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.920 "dhchap_key": "key2", 00:25:41.920 "method": "bdev_nvme_attach_controller", 00:25:41.920 "req_id": 1 00:25:41.920 } 00:25:41.920 Got JSON-RPC error response 00:25:41.920 response: 00:25:41.920 { 00:25:41.920 "code": -32602, 00:25:41.920 "message": "Invalid parameters" 00:25:41.920 } 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.920 request: 00:25:41.920 { 00:25:41.920 "name": "nvme0", 00:25:41.920 "trtype": "tcp", 00:25:41.920 "traddr": "10.0.0.1", 00:25:41.920 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:41.920 "adrfam": "ipv4", 00:25:41.920 "trsvcid": "4420", 00:25:41.920 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.920 "dhchap_key": "key1", 00:25:41.920 "dhchap_ctrlr_key": "ckey2", 00:25:41.920 "method": "bdev_nvme_attach_controller", 00:25:41.920 "req_id": 1 00:25:41.920 } 00:25:41.920 Got JSON-RPC error response 00:25:41.920 response: 00:25:41.920 { 00:25:41.920 "code": -32602, 00:25:41.920 "message": "Invalid parameters" 00:25:41.920 } 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:41.920 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.921 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:41.921 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:41.921 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:41.921 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.921 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:41.921 rmmod nvme_tcp 00:25:42.180 rmmod nvme_fabrics 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2819909 ']' 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2819909 00:25:42.180 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 2819909 ']' 00:25:42.181 10:44:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 2819909 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2819909 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2819909' 00:25:42.181 killing process with pid 2819909 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 2819909 00:25:42.181 10:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 2819909 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.440 10:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:44.974 10:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:47.514 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.514 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.514 0000:79:02.0 (8086 
0cfe): idxd -> vfio-pci 00:25:47.514 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.514 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.514 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.514 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.514 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.514 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.773 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.773 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.773 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.773 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.773 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:25:47.773 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:47.773 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:25:48.346 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:25:48.662 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:25:48.921 10:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6GZ /tmp/spdk.key-null.Hqh /tmp/spdk.key-sha256.HcV /tmp/spdk.key-sha384.taO /tmp/spdk.key-sha512.unc /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:25:48.921 10:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:51.458 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:25:51.458 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:51.458 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:51.458 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:25:51.717 00:25:51.717 real 0m47.240s 00:25:51.717 user 0m39.559s 00:25:51.717 sys 0m12.131s 00:25:51.717 10:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:51.717 10:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.717 ************************************ 00:25:51.717 END TEST nvmf_auth_host 00:25:51.717 ************************************ 00:25:51.717 10:45:07 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:51.717 10:45:07 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.717 10:45:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:51.717 10:45:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:51.717 10:45:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.717 ************************************ 00:25:51.717 START 
TEST nvmf_digest 00:25:51.717 ************************************ 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.717 * Looking for test storage... 00:25:51.717 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.717 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.977 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.978 10:45:07 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.978 10:45:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- 
# [[ '' == e810 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:57.256 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:57.256 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:57.256 Found net devices under 0000:27:00.0: cvl_0_0 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:57.256 Found net devices under 0000:27:00.1: cvl_0_1 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:57.256 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.257 10:45:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.257 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.257 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.257 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:57.513 00:25:57.513 --- 10.0.0.2 ping statistics --- 00:25:57.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.513 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:25:57.513 00:25:57.513 --- 10.0.0.1 ping statistics --- 00:25:57.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.513 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:57.513 ************************************ 00:25:57.513 START TEST nvmf_digest_dsa_initiator 00:25:57.513 ************************************ 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1122 -- # run_digest dsa_initiator 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@120 -- # local dsa_initiator 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # dsa_initiator=true 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:57.513 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@481 -- # nvmfpid=2835266 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@482 -- # waitforlisten 2835266 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2835266 ']' 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 10:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:57.772 [2024-05-15 10:45:13.394504] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:25:57.772 [2024-05-15 10:45:13.394604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.772 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.772 [2024-05-15 10:45:13.515477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.772 [2024-05-15 10:45:13.613227] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.772 [2024-05-15 10:45:13.613266] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.772 [2024-05-15 10:45:13.613275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.772 [2024-05-15 10:45:13.613285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.772 [2024-05-15 10:45:13.613293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
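For orientation, the network-namespace and target bring-up traced above boils down to roughly the following commands (interface names, addresses and flags are the ones this run printed; $SPDK_DIR stands in for the full workspace path and is the only substitution, so read this as an illustrative sketch rather than the test script itself):

  # put one port of the NIC into a private namespace and address both sides
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the NVMe-oF target inside the namespace; --wait-for-rpc defers
  # framework init so the test can configure accel modules first
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &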
00:25:57.772 [2024-05-15 10:45:13.613319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@126 -- # common_target_config 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@43 -- # rpc_cmd 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.341 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:25:58.601 null0 00:25:58.601 [2024-05-15 10:45:14.279266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.601 [2024-05-15 10:45:14.303198] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:58.601 [2024-05-15 10:45:14.303454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2835572 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2835572 /var/tmp/bperf.sock 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2835572 ']' 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:58.601 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:58.601 10:45:14 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:58.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:58.602 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:58.602 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:25:58.602 10:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:58.602 [2024-05-15 10:45:14.382280] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:25:58.602 [2024-05-15 10:45:14.382392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835572 ] 00:25:58.602 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.859 [2024-05-15 10:45:14.498784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.859 [2024-05-15 10:45:14.590365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:25:59.424 [2024-05-15 10:45:15.206852] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.424 10:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:04.693 10:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.693 10:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.978 nvme0n1 00:26:04.978 10:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:04.978 10:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.978 Running I/O for 2 seconds... 
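[editor's note] Because bdevperf is also started with --wait-for-rpc, the test can select the DSA accel module before any I/O path exists: it sends dsa_scan_accel_module to the bperf RPC socket, only then calls framework_start_init, and finally attaches the NVMe/TCP controller with --ddgst so data digests (crc32c) are enabled on the host side. Condensed from the RPC calls in the trace above; the rpc() wrapper below just stands in for bperf_rpc/bperf_py from host/digest.sh:

    # Order matters: choose the accel module before framework_start_init (sketch)
    rpc() { /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc dsa_scan_accel_module          # host/digest.sh@86, only when scan_dsa=true
    rpc framework_start_init           # host/digest.sh@87
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # host/digest.sh@89
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                    # host/digest.sh@92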
00:26:07.510 00:26:07.510 Latency(us) 00:26:07.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:07.510 nvme0n1 : 2.04 23069.66 90.12 0.00 0.00 5434.67 2345.50 43046.80 00:26:07.510 =================================================================================================================== 00:26:07.510 Total : 23069.66 90.12 0.00 0.00 5434.67 2345.50 43046.80 00:26:07.510 0 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:07.510 | select(.opcode=="crc32c") 00:26:07.510 | "\(.module_name) \(.executed)"' 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2835572 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2835572 ']' 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2835572 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2835572 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2835572' 00:26:07.510 killing process with pid 2835572 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2835572 00:26:07.510 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.510 00:26:07.510 Latency(us) 00:26:07.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.510 =================================================================================================================== 00:26:07.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.510 10:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2835572 00:26:08.888 10:45:24 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2837358 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2837358 /var/tmp/bperf.sock 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2837358 ']' 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:08.888 10:45:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:08.888 [2024-05-15 10:45:24.450492] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:08.888 [2024-05-15 10:45:24.450638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837358 ] 00:26:08.888 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.888 Zero copy mechanism will not be used. 
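[editor's note] Each run_bperf invocation maps directly onto a bdevperf command line: the first three arguments become -w/-o/-q, the runtime is fixed at -t 2, and the fourth argument only decides whether dsa_scan_accel_module is sent before framework_start_init. The "zero copy threshold (65536)" notice above is expected for the 128 KiB runs, since bdevperf itself reports that it disables zero copy for I/O sizes above 64 KiB. A sketch of the mapping, taken from the command lines in the trace ($rw/$bs/$qd are placeholders for the run_bperf arguments):

    # run_bperf <rw> <bs> <qd> <scan_dsa>  ->  bdevperf flags (sketch; -m 2 and -t 2 are fixed)
    #   randread 4096   128 true  ->  -w randread -o 4096   -q 128
    #   randread 131072 16  true  ->  -w randread -o 131072 -q 16
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc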
00:26:08.888 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.888 [2024-05-15 10:45:24.579592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.888 [2024-05-15 10:45:24.674024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:09.453 [2024-05-15 10:45:25.270596] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.453 10:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.775 10:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.775 10:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.032 nvme0n1 00:26:15.032 10:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.032 10:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.032 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.032 Zero copy mechanism will not be used. 00:26:15.032 Running I/O for 2 seconds... 
00:26:17.563 00:26:17.563 Latency(us) 00:26:17.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.563 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:17.563 nvme0n1 : 2.00 6306.18 788.27 0.00 0.00 2534.10 655.36 5794.76 00:26:17.563 =================================================================================================================== 00:26:17.563 Total : 6306.18 788.27 0.00 0.00 2534.10 655.36 5794.76 00:26:17.563 0 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.563 | select(.opcode=="crc32c") 00:26:17.563 | "\(.module_name) \(.executed)"' 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2837358 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2837358 ']' 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2837358 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:17.563 10:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2837358 00:26:17.563 10:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:17.563 10:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:17.563 10:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2837358' 00:26:17.563 killing process with pid 2837358 00:26:17.563 10:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2837358 00:26:17.563 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.563 00:26:17.563 Latency(us) 00:26:17.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.563 =================================================================================================================== 00:26:17.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.563 10:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2837358 00:26:18.995 10:45:34 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2839383 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2839383 /var/tmp/bperf.sock 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2839383 ']' 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:18.995 10:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:18.995 [2024-05-15 10:45:34.531952] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:26:18.995 [2024-05-15 10:45:34.532105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839383 ] 00:26:18.995 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.995 [2024-05-15 10:45:34.654319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.995 [2024-05-15 10:45:34.745465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:19.560 [2024-05-15 10:45:35.354006] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:19.560 10:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.830 10:45:40 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.830 10:45:40 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.088 nvme0n1 00:26:25.088 10:45:40 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.088 10:45:40 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.088 Running I/O for 2 seconds... 
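[editor's note] The IOPS and MiB/s columns in these result tables are redundant with each other (MiB/s = IOPS x I/O size / 2^20), which allows a quick sanity check. From the two randread tables above: 23069.66 IOPS x 4096 B = 94,493,327 B/s, and 94,493,327 / 1,048,576 = 90.12 MiB/s; likewise 6306.18 IOPS x 131072 B = 826,563,625 B/s = 788.27 MiB/s. Both match the reported MiB/s values.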
00:26:26.994 00:26:26.994 Latency(us) 00:26:26.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.994 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:26.994 nvme0n1 : 2.00 27414.76 107.09 0.00 0.00 4660.36 2104.05 6760.56 00:26:26.994 =================================================================================================================== 00:26:26.994 Total : 27414.76 107.09 0.00 0.00 4660.36 2104.05 6760.56 00:26:27.253 0 00:26:27.253 10:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:27.253 10:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:27.253 10:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:27.253 10:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.253 10:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:27.253 | select(.opcode=="crc32c") 00:26:27.253 | "\(.module_name) \(.executed)"' 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2839383 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2839383 ']' 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2839383 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2839383 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2839383' 00:26:27.253 killing process with pid 2839383 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2839383 00:26:27.253 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.253 00:26:27.253 Latency(us) 00:26:27.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.253 =================================================================================================================== 00:26:27.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.253 10:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2839383 00:26:28.628 10:45:44 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:26:28.628 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:28.628 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:28.628 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2841205 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2841205 /var/tmp/bperf.sock 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2841205 ']' 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:28.629 10:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:28.889 [2024-05-15 10:45:44.567322] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:28.889 [2024-05-15 10:45:44.567450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841205 ] 00:26:28.889 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.889 Zero copy mechanism will not be used. 
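[editor's note] Each run above is torn down with killprocess from autotest_common.sh; the xtrace shows the checks it performs before sending the signal. A condensed sketch reconstructed from those traces (the real helper does more, e.g. fallback handling not visible here):

    killprocess() {                      # condensed from the xtrace above (sketch)
        local pid=$1
        [ -z "$pid" ] && return 1        # '[' -z <pid> ']'
        kill -0 "$pid" || return 1       # still running?
        if [ "$(uname)" = Linux ]; then
            local name; name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1   # never kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap and propagate the exit status
    }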
00:26:28.889 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.889 [2024-05-15 10:45:44.681006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.147 [2024-05-15 10:45:44.772092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.405 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:29.405 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:26:29.405 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:29.405 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:29.405 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:29.663 [2024-05-15 10:45:45.384656] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:29.663 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.663 10:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:34.970 10:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.970 10:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.228 nvme0n1 00:26:35.228 10:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:35.228 10:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.228 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.228 Zero copy mechanism will not be used. 00:26:35.228 Running I/O for 2 seconds... 
00:26:37.174 00:26:37.174 Latency(us) 00:26:37.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.174 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:37.174 nvme0n1 : 2.00 7837.74 979.72 0.00 0.00 2037.32 1172.75 5760.27 00:26:37.174 =================================================================================================================== 00:26:37.174 Total : 7837.74 979.72 0.00 0.00 2037.32 1172.75 5760.27 00:26:37.174 0 00:26:37.174 10:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:37.174 10:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:37.174 10:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:37.174 | select(.opcode=="crc32c") 00:26:37.174 | "\(.module_name) \(.executed)"' 00:26:37.174 10:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:37.174 10:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2841205 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2841205 ']' 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2841205 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2841205 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2841205' 00:26:37.432 killing process with pid 2841205 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2841205 00:26:37.432 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.432 00:26:37.432 Latency(us) 00:26:37.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.432 =================================================================================================================== 00:26:37.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.432 10:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2841205 00:26:38.808 10:45:54 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@132 -- # killprocess 2835266 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2835266 ']' 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2835266 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2835266 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2835266' 00:26:38.808 killing process with pid 2835266 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2835266 00:26:38.808 [2024-05-15 10:45:54.650347] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:38.808 10:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2835266 00:26:39.379 00:26:39.379 real 0m41.793s 00:26:39.379 user 1m1.926s 00:26:39.379 sys 0m3.827s 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:39.379 ************************************ 00:26:39.379 END TEST nvmf_digest_dsa_initiator 00:26:39.379 ************************************ 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:39.379 ************************************ 00:26:39.379 START TEST nvmf_digest_dsa_target 00:26:39.379 ************************************ 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1122 -- # run_digest dsa_target 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@120 -- # local dsa_initiator 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # dsa_initiator=false 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.379 
10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@481 -- # nvmfpid=2843308 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@482 -- # waitforlisten 2843308 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2843308 ']' 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:39.379 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.641 [2024-05-15 10:45:55.274707] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:39.641 [2024-05-15 10:45:55.274831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.641 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.641 [2024-05-15 10:45:55.411181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.641 [2024-05-15 10:45:55.509585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.641 [2024-05-15 10:45:55.509646] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.641 [2024-05-15 10:45:55.509656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.641 [2024-05-15 10:45:55.509666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.641 [2024-05-15 10:45:55.509674] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
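[editor's note] Both nvmf_tgt instances are started with -e 0xFFFF, so all tracepoint groups are enabled; the app_setup_trace notices above say how to pull those events. Restated as commands, straight from the notices (the /tmp destination below is just an example):

    # live snapshot of the nvmf target's tracepoints (app instance id 0), per the notice above
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/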
00:26:39.641 [2024-05-15 10:45:55.509718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.212 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:40.212 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:26:40.212 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.212 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:40.212 10:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.212 [2024-05-15 10:45:56.038267] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@126 -- # common_target_config 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@43 -- # rpc_cmd 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.212 10:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.494 null0 00:26:45.494 [2024-05-15 10:46:01.146049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.494 [2024-05-15 10:46:01.172689] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:45.494 [2024-05-15 10:46:01.172958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2844498 00:26:45.494 10:46:01 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2844498 /var/tmp/bperf.sock 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2844498 ']' 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.494 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:45.494 [2024-05-15 10:46:01.246628] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:45.494 [2024-05-15 10:46:01.246736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844498 ] 00:26:45.494 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.494 [2024-05-15 10:46:01.357075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.754 [2024-05-15 10:46:01.446276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.324 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:46.324 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:26:46.324 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:26:46.324 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.324 10:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:46.582 10:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.582 10:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.840 nvme0n1 00:26:46.840 10:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:46.840 10:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.840 Running I/O for 2 seconds... 
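[editor's note] In the dsa_target variant the roles flip: dsa_scan_accel_module is issued through rpc_cmd to the nvmf target's own RPC socket before its framework starts, while bdevperf is brought up with scan_dsa=false, so host/digest.sh@86 evaluates `false` and the initiator keeps the default software accel module. That is why the stats check after these runs expects "software" instead of "dsa". The two control paths, condensed from the traces above (socket paths as printed; the exact rpc_cmd wrapper is not shown in the trace):

    # dsa_target: DSA goes to the nvmf target's accel framework, not to bdevperf (sketch)
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py dsa_scan_accel_module   # rpc_cmd -> target's /var/tmp/spdk.sock
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0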
00:26:49.375 00:26:49.375 Latency(us) 00:26:49.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.375 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.375 nvme0n1 : 2.00 22878.08 89.37 0.00 0.00 5588.05 2207.53 17384.29 00:26:49.375 =================================================================================================================== 00:26:49.375 Total : 22878.08 89.37 0.00 0.00 5588.05 2207.53 17384.29 00:26:49.375 0 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.375 | select(.opcode=="crc32c") 00:26:49.375 | "\(.module_name) \(.executed)"' 00:26:49.375 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2844498 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2844498 ']' 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2844498 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2844498 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2844498' 00:26:49.376 killing process with pid 2844498 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2844498 00:26:49.376 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.376 00:26:49.376 Latency(us) 00:26:49.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.376 =================================================================================================================== 00:26:49.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.376 10:46:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2844498 00:26:49.376 10:46:05 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2845110 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2845110 /var/tmp/bperf.sock 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2845110 ']' 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:49.376 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.633 [2024-05-15 10:46:05.267628] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:49.633 [2024-05-15 10:46:05.267741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845110 ] 00:26:49.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.634 Zero copy mechanism will not be used. 
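[editor's note] The pass/fail decision for every run is the same short check from host/digest.sh: accel_get_stats is filtered down to the crc32c operation, and the test passes only if some crc32c work was executed and the executing module matches the expectation. A sketch assembled from the jq filter and the comparisons visible in the traces (host/digest.sh@93-96; the process-substitution form is a reconstruction):

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software              # "dsa" when dsa_scan_accel_module was sent to this bperf instance
    (( acc_executed > 0 ))           # some crc32c work actually ran
    [[ $acc_module == "$exp_module" ]]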
00:26:49.634 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.634 [2024-05-15 10:46:05.379566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.634 [2024-05-15 10:46:05.469295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.204 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:50.204 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:26:50.204 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:26:50.204 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.204 10:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.464 10:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.464 10:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.723 nvme0n1 00:26:50.723 10:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.723 10:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.981 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.981 Zero copy mechanism will not be used. 00:26:50.981 Running I/O for 2 seconds... 
00:26:52.884 00:26:52.884 Latency(us) 00:26:52.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.884 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:52.884 nvme0n1 : 2.00 7302.98 912.87 0.00 0.00 2188.09 528.17 8416.20 00:26:52.884 =================================================================================================================== 00:26:52.884 Total : 7302.98 912.87 0.00 0.00 2188.09 528.17 8416.20 00:26:52.884 0 00:26:52.884 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:52.884 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:26:52.884 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:52.884 | select(.opcode=="crc32c") 00:26:52.884 | "\(.module_name) \(.executed)"' 00:26:52.884 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:52.884 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2845110 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2845110 ']' 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2845110 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2845110 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2845110' 00:26:53.142 killing process with pid 2845110 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2845110 00:26:53.142 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.142 00:26:53.142 Latency(us) 00:26:53.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.142 =================================================================================================================== 00:26:53.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.142 10:46:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2845110 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target 
-- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2845999 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2845999 /var/tmp/bperf.sock 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2845999 ']' 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.400 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:53.658 [2024-05-15 10:46:09.277551] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:26:53.658 [2024-05-15 10:46:09.277642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845999 ] 00:26:53.658 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.658 [2024-05-15 10:46:09.362164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.658 [2024-05-15 10:46:09.451210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.227 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:54.227 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:26:54.227 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:26:54.227 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:54.227 10:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:54.486 10:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.486 10:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.746 nvme0n1 00:26:54.746 10:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:54.746 10:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.005 Running I/O for 2 seconds... 
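[editor note] After each of these passes the harness verifies which accel module actually executed the crc32c work: it calls accel_get_stats on the same bperf socket and pipes the result through the jq filter shown after the first pass above, reading the module name and execution count into acc_module/acc_executed. With scan_dsa=false it expects the "software" module and a non-zero count. A sketch of that check, with the filter copied from the trace:

    # Sketch of the post-run verification; expected output here is "software <count>"
    # with <count> greater than zero, since DSA offload is disabled for these passes.
    RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC accel_get_stats | jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"'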
00:26:56.937 00:26:56.937 Latency(us) 00:26:56.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.937 nvme0n1 : 2.00 25786.74 100.73 0.00 0.00 4954.12 2414.48 10209.82 00:26:56.937 =================================================================================================================== 00:26:56.937 Total : 25786.74 100.73 0.00 0.00 4954.12 2414.48 10209.82 00:26:56.937 0 00:26:56.937 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:56.937 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:26:56.937 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:56.937 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:56.937 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:56.937 | select(.opcode=="crc32c") 00:26:56.937 | "\(.module_name) \(.executed)"' 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2845999 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2845999 ']' 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2845999 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2845999 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2845999' 00:26:57.198 killing process with pid 2845999 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2845999 00:26:57.198 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.198 00:26:57.198 Latency(us) 00:26:57.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.198 =================================================================================================================== 00:26:57.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.198 10:46:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2845999 00:26:57.464 10:46:13 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2846767 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2846767 /var/tmp/bperf.sock 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2846767 ']' 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.464 10:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:57.724 [2024-05-15 10:46:13.337182] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:26:57.724 [2024-05-15 10:46:13.337280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846767 ] 00:26:57.724 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.724 Zero copy mechanism will not be used. 
00:26:57.724 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.724 [2024-05-15 10:46:13.425025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.724 [2024-05-15 10:46:13.514954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.292 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:58.292 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:26:58.292 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:26:58.292 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.292 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.551 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.551 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.810 nvme0n1 00:26:58.810 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:58.810 10:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.810 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.810 Zero copy mechanism will not be used. 00:26:58.810 Running I/O for 2 seconds... 
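[editor note] After the results below, the harness tears this bdevperf instance down with the same killprocess pattern seen after the two earlier passes: it confirms the pid is alive, checks the process name so it never kills an unrelated reused pid, then kills and waits on it. A sketch using the pid traced for this pass:

    # Sketch of the killprocess teardown visible after each pass.
    pid=2846767
    kill -0 "$pid"                       # still running?
    ps --no-headers -o comm= "$pid"      # trace expects an SPDK reactor thread name here
    kill "$pid" && wait "$pid"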
00:27:01.340 00:27:01.340 Latency(us) 00:27:01.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:01.340 nvme0n1 : 2.00 7801.31 975.16 0.00 0.00 2045.91 1474.56 5139.40 00:27:01.340 =================================================================================================================== 00:27:01.340 Total : 7801.31 975.16 0.00 0.00 2045.91 1474.56 5139.40 00:27:01.340 0 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.340 | select(.opcode=="crc32c") 00:27:01.340 | "\(.module_name) \(.executed)"' 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2846767 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2846767 ']' 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2846767 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2846767 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2846767' 00:27:01.340 killing process with pid 2846767 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2846767 00:27:01.340 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.340 00:27:01.340 Latency(us) 00:27:01.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.340 =================================================================================================================== 00:27:01.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.340 10:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2846767 00:27:01.599 10:46:17 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@132 -- # killprocess 2843308 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2843308 ']' 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2843308 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2843308 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2843308' 00:27:01.599 killing process with pid 2843308 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2843308 00:27:01.599 [2024-05-15 10:46:17.274967] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:01.599 10:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2843308 00:27:02.977 00:27:02.977 real 0m23.609s 00:27:02.977 user 0m34.412s 00:27:02.977 sys 0m3.622s 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.977 ************************************ 00:27:02.977 END TEST nvmf_digest_dsa_target 00:27:02.977 ************************************ 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:02.977 10:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:03.238 ************************************ 00:27:03.238 START TEST nvmf_digest_error 00:27:03.238 ************************************ 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2847821 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2847821 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2847821 ']' 00:27:03.238 10:46:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.238 10:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:03.238 [2024-05-15 10:46:18.961515] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:03.238 [2024-05-15 10:46:18.961642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.238 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.238 [2024-05-15 10:46:19.091823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.497 [2024-05-15 10:46:19.189819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.497 [2024-05-15 10:46:19.189868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.497 [2024-05-15 10:46:19.189879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.497 [2024-05-15 10:46:19.189890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.497 [2024-05-15 10:46:19.189898] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
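[editor note] The error-path test below reroutes crc32c on the nvmf target to the accel "error" module while the target is still paused by --wait-for-rpc, then switches that module to corrupt mode once the initiator has attached with --ddgst. The resulting digest mismatches are what produce the long run of nvme_tcp "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR lines that fill the remainder of this section. A condensed sketch of those RPCs, copied from the trace below and issued against the target's default /var/tmp/spdk.sock (the harness wraps them in rpc_cmd):

    # Sketch only: crc32c error injection as shown in the trace below.
    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o crc32c -m error                   # crc32c routed to the accel "error" module
    $RPC accel_error_inject_error -o crc32c -t disable         # injection off while the controller attaches
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # then corrupt crc32c results (flags as traced)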
00:27:03.497 [2024-05-15 10:46:19.189941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.064 [2024-05-15 10:46:19.690486] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.064 null0 00:27:04.064 [2024-05-15 10:46:19.845196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.064 [2024-05-15 10:46:19.869129] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:04.064 [2024-05-15 10:46:19.869380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2848131 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2848131 /var/tmp/bperf.sock 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2848131 ']' 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.064 10:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:04.064 [2024-05-15 10:46:19.920083] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:04.064 [2024-05-15 10:46:19.920158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848131 ] 00:27:04.323 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.323 [2024-05-15 10:46:20.005375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.323 [2024-05-15 10:46:20.106584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.891 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:04.891 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:04.892 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.892 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.150 10:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.409 nvme0n1 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:27:05.409 10:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.409 Running I/O for 2 seconds... 00:27:05.409 [2024-05-15 10:46:21.280919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.409 [2024-05-15 10:46:21.280975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.409 [2024-05-15 10:46:21.280991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.290387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.667 [2024-05-15 10:46:21.290418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.667 [2024-05-15 10:46:21.290430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.299190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.667 [2024-05-15 10:46:21.299220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.667 [2024-05-15 10:46:21.299231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.310781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.667 [2024-05-15 10:46:21.310805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.667 [2024-05-15 10:46:21.310816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.322274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.667 [2024-05-15 10:46:21.322299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.667 [2024-05-15 10:46:21.322309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.331080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.667 [2024-05-15 10:46:21.331105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.667 [2024-05-15 10:46:21.331116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.667 [2024-05-15 10:46:21.343353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.343380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.343389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.355725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.355751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.355761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.364132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.364156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.364166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.375970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.375994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.376003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.388832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.388856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.388865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.401310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.401338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.401348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.409655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.409678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.409688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.420037] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.420066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.420089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.429259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.429282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.429292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.440619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.440643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.440653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.452110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.452135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.452145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.460759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.460782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.460792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.472508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.472531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.472541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.483370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.483394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.483404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.491925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.491959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.502754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.502779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.502789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.512883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.512907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.512916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.521821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.521845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.521855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.668 [2024-05-15 10:46:21.530771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.668 [2024-05-15 10:46:21.530794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.668 [2024-05-15 10:46:21.530804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.927 [2024-05-15 10:46:21.541203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.927 [2024-05-15 10:46:21.541227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-05-15 10:46:21.541237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.927 [2024-05-15 10:46:21.552516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.552545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.552555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.561340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.561369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.561381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.573551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.573576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.573586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.582509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.582533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.582543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.593497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.593521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.593535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.602324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.602354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.602364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.614203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.614228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.614238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.623294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.623320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15511 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.623330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.633235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.633259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.633269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.642757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.642782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.642792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.655282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.655307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.655317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.665275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.665301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.674018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.674041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.674055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.686592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.686616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.686625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.698695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.698721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.698731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.710960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.710984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.710993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.723224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.723247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.723256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.735562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.735586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.735596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.747894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.747918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.747927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.756558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.756581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.756591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.767416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.767439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.767448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.777711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.777735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.777748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.786316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.786346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.928 [2024-05-15 10:46:21.797019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:05.928 [2024-05-15 10:46:21.797053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.928 [2024-05-15 10:46:21.797063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.805611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.805641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.805651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.814871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.814897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.814907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.824810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.824836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.824846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.835536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.835561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.835571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.848141] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.848166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.848176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.860193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.860219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.860228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.872338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.187 [2024-05-15 10:46:21.872375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.187 [2024-05-15 10:46:21.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.187 [2024-05-15 10:46:21.880776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.880801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.880812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.893169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.893194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.893203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.905143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.905169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.905179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.917673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.917699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.917709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.928908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.928932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.928942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.937841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.937866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.937876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.948448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.948473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.948482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.956663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.956687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.956700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.966552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.966575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.966585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.976659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.976682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.976691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.985131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.985155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.985165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:21.997409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:21.997435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:21.997445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.008187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.008211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.008220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.016382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.016405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.016414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.025790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.025814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.025824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.035788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.035814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.035824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.045281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.045311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.188 [2024-05-15 10:46:22.055703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.188 [2024-05-15 10:46:22.055733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16545 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.188 [2024-05-15 10:46:22.055742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.064149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.064180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.064191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.073635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.073660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.073670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.082903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.082926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.082936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.093055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.093079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.093088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.103540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.103570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.103580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.112333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.112369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.121413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.121437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.121452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.130090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.130115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.130125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.139958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.139981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.139990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.148523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.148547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.148556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.159269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.159294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.159303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.168636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.168659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.168669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.177840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.177868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.177878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.187384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.187409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.187418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.196865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.196891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.196901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.205566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.205594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.205604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.216726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.216750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.450 [2024-05-15 10:46:22.216760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.450 [2024-05-15 10:46:22.225924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.450 [2024-05-15 10:46:22.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.225957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.237083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.237108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.237117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.248208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.248233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.248242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.256689] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.256715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.256724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.268326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.268350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.268359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.279844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.279870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.279879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.288001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.288024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.288038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.299373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.299399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.299409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.307602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.307625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.307635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.451 [2024-05-15 10:46:22.321487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.451 [2024-05-15 10:46:22.321526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.451 [2024-05-15 10:46:22.321541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.333293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.333321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.333331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.342780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.342805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.342815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.354076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.354102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.354112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.365943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.365970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.365980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.374763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.374789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.374799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.385486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.385517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.385526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.397741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.397769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.397779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.410098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.410122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.410132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.421525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.421551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.421560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.430062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.430088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.430098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.443410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.443436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.443445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.713 [2024-05-15 10:46:22.451246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.713 [2024-05-15 10:46:22.451272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.713 [2024-05-15 10:46:22.451283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.462396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.462421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.462431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.473168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.473193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4724 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.473203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.481486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.481512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.481521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.491259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.491285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.491295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.500728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.500760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.510412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.510440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.510450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.519890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.519920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.519932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.532619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.532650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.532663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.541196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.541221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.541231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.553167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.553193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.553202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.563705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.563736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.563746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.572933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.572968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.572980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.714 [2024-05-15 10:46:22.583369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.714 [2024-05-15 10:46:22.583398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.714 [2024-05-15 10:46:22.583408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.594858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.594887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.594897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.603521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.603548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.603558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.613475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.613504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.613514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.623513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.623540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.623550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.632744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.632769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.632779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.642488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.642514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.642524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.651641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.651667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.651677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.660722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.660747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.660757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.669973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.670004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.670014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.679319] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.679345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.679355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.688650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.688676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.688686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.976 [2024-05-15 10:46:22.698351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.976 [2024-05-15 10:46:22.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.976 [2024-05-15 10:46:22.698390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.708247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.708275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.708284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.717694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.717720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.717729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.727005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.727035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.727059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.735217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.735243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.735253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.746255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.746284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.746294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.756002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.756027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.756037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.765584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.765609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.765618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.774737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.774762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.774772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.783700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.783725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.783735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.794932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.794962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.794973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.804170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.804198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.804208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.815596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.815623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.815633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.826475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.826507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.826518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.834528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.834555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.834566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.977 [2024-05-15 10:46:22.845953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:06.977 [2024-05-15 10:46:22.845981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.977 [2024-05-15 10:46:22.845991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.856582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.856609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.856619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.867562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.867587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.867596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.876227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.876251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14615 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.876260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.886433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.886455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.886465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.898747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.898785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.898795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.907898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.907934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.918372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.918396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.918405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.929440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.929464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.929473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.937565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.937589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.937599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.949232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.949256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.949266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.960321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.960344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.960353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.972693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.972717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.972725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.981772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.981796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.981805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:22.992316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:22.992340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:22.992349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.002950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.002979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.002989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.011324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.011348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.011358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.022433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.022458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.022468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.031572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.031596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.031606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.040249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.040274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.040283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.050979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.051002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.051011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.059125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.059159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.059170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.070531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.070555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.070569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.080699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.080726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.080736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 
10:46:23.091948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.091976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.091986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.239 [2024-05-15 10:46:23.101012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.239 [2024-05-15 10:46:23.101038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.239 [2024-05-15 10:46:23.101052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.113458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.113484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.125145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.125169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.133841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.133864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.133874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.145327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.145350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.145359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.155476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.155500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.155510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.164672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.164723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.164733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.175925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.175950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.175960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.186771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.186795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.186804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.195631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.195657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.195666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.206452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.206476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.206485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.215336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.215361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.226507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.226530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 
10:46:23.226540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.236575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.236599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.236609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.245153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.245178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.245193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 [2024-05-15 10:46:23.255493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:07.498 [2024-05-15 10:46:23.255518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.498 [2024-05-15 10:46:23.255528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.498 00:27:07.498 Latency(us) 00:27:07.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.498 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:07.498 nvme0n1 : 2.00 24815.95 96.94 0.00 0.00 5152.78 2586.95 16625.45 00:27:07.498 =================================================================================================================== 00:27:07.498 Total : 24815.95 96.94 0.00 0.00 5152.78 2586.95 16625.45 00:27:07.498 0 00:27:07.498 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:07.498 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:07.498 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:07.498 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:07.498 | .driver_specific 00:27:07.498 | .nvme_error 00:27:07.498 | .status_code 00:27:07.498 | .command_transient_transport_error' 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2848131 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2848131 ']' 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2848131 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2848131 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2848131' 00:27:07.756 killing process with pid 2848131 00:27:07.756 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2848131 00:27:07.757 Received shutdown signal, test time was about 2.000000 seconds 00:27:07.757 00:27:07.757 Latency(us) 00:27:07.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.757 =================================================================================================================== 00:27:07.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.757 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2848131 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2848747 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2848747 /var/tmp/bperf.sock 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2848747 ']' 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.015 10:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:08.015 [2024-05-15 10:46:23.854174] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:08.015 [2024-05-15 10:46:23.854260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848747 ] 00:27:08.015 I/O size of 131072 is greater than zero copy threshold (65536). 
00:27:08.015 Zero copy mechanism will not be used. 00:27:08.273 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.273 [2024-05-15 10:46:23.941496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.273 [2024-05-15 10:46:24.032400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.843 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.104 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.104 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.104 10:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.363 nvme0n1 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:09.363 10:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:09.363 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:09.363 Zero copy mechanism will not be used. 00:27:09.363 Running I/O for 2 seconds... 
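For readers following the trace above: the digest-error job reduces to a handful of RPC calls around the bdevperf instance that was just launched on /var/tmp/bperf.sock (-m 2 -w randread -o 131072 -t 2 -q 16 -z). Below is a minimal Python sketch of that sequence. The socket used by rpc_cmd for the accel_error_inject_error calls is not expanded in the trace, so the target_rpc helper assumes the target app's default RPC socket; both helper names are illustrative, not part of the harness.

#!/usr/bin/env python3
# Sketch of the digest-error setup traced above (not the harness itself): enable
# NVMe error counters with unlimited bdev retries, attach the NVMe-oF TCP controller
# with data digest enabled (--ddgst), arm CRC32C corruption in the accel error
# injection hook, then start the bdevperf job. Paths, address and NQN are taken
# from this run's log; helper names and the target socket are assumptions.
import subprocess

SPDK = "/var/jenkins/workspace/dsa-phy-autotest/spdk"

def bperf_rpc(*args):
    # RPCs addressed to the bdevperf app, as in the trace (-s /var/tmp/bperf.sock).
    cmd = [f"{SPDK}/scripts/rpc.py", "-s", "/var/tmp/bperf.sock", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def target_rpc(*args):
    # RPCs issued via rpc_cmd in the trace; assumed to hit the target app's default socket.
    cmd = [f"{SPDK}/scripts/rpc.py", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def run_digest_error_job():
    bperf_rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
    target_rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    bperf_rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
              "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Corrupt every 32nd CRC32C operation so data digest validation fails on reads.
    target_rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")
    # Kick off the randread workload bdevperf was started with (-q 16 -o 131072 -t 2).
    subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                    "-s", "/var/tmp/bperf.sock", "perform_tests"], check=True)

if __name__ == "__main__":
    run_digest_error_job()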
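Once the 2-second run completes, host/digest.sh repeats the same pass/fail check seen at 00:27:07 above: it fetches bdev_get_iostat over the bperf socket and extracts the transient transport error counter with jq. A rough Python equivalent of that jq filter follows, using the same socket and bdev name as in the trace; the threshold check mirrors the (( errcount > 0 )) test.

#!/usr/bin/env python3
# Rough Python equivalent of the get_transient_errcount check: read bdev_get_iostat
# from the bdevperf RPC socket and extract the transient transport error counter,
# following the same path as the jq filter in the trace
# (.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error).
import json
import subprocess

RPC_PY = "/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py"

def get_transient_errcount(bdev="nvme0n1", sock="/var/tmp/bperf.sock"):
    out = subprocess.run([RPC_PY, "-s", sock, "bdev_get_iostat", "-b", bdev],
                         check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

if __name__ == "__main__":
    # The digest test passes only if the corrupted digests produced at least one
    # transient transport error, mirroring the (( errcount > 0 )) check in digest.sh.
    errcount = get_transient_errcount()
    print(errcount)
    assert errcount > 0, "expected transient transport errors from digest corruption"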
00:27:09.363 [2024-05-15 10:46:25.108273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.108328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.108344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.113923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.113956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.113968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.119402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.119429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.119439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.124543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.124567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.124578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.130193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.130217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.130227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.135900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.135924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.135933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.142531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.142556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.142565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.150474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.150500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.150512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.157639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.157665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.157678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.163761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.163786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.163796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.168650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.168673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.168682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.173585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.173608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.173617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.178405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.178426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.178436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.184011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.184033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 
[2024-05-15 10:46:25.184056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.189247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.189269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.189278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.194982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.363 [2024-05-15 10:46:25.195005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.363 [2024-05-15 10:46:25.195014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.363 [2024-05-15 10:46:25.200513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.200544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.200554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.206002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.206035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.211708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.211730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.211740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.217256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.217278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.217288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.222865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.222889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.222898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.228700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.228724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.228733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.364 [2024-05-15 10:46:25.234507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.364 [2024-05-15 10:46:25.234531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.364 [2024-05-15 10:46:25.234540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.624 [2024-05-15 10:46:25.240502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.624 [2024-05-15 10:46:25.240526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.624 [2024-05-15 10:46:25.240536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.624 [2024-05-15 10:46:25.245927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.624 [2024-05-15 10:46:25.245949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.624 [2024-05-15 10:46:25.245959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.624 [2024-05-15 10:46:25.252382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.252405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.252414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.257415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.257448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.261827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 
10:46:25.261849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.261859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.266613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.266640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.266651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.271587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.271616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.271629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.276790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.276817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.276829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.281809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.281833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.281845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.286649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.286672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.286682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.291413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.291438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.291452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.296362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.296385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.296395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.301138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.301163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.301172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.305934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.305958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.305967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.310674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.310706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.315439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.315462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.315472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.320117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.320140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.320150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.323230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.323253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.323263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 
[2024-05-15 10:46:25.328447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.328470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.328479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.334436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.334459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.334469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.339555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.339578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.339587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.344856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.344880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.344889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.350119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.350142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.350152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.355892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.355916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.355926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.360862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.360887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.360896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.365254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.365284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.365294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.625 [2024-05-15 10:46:25.369830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.625 [2024-05-15 10:46:25.369855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.625 [2024-05-15 10:46:25.369865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.374255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.374282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.374297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.378245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.378268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.378278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.381896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.381919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.381929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.385748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.385771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.385781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.389795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.389819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 
[2024-05-15 10:46:25.389829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.392956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.392979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.392989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.395204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.395234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.395244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.398922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.398948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.398957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.402973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.402997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.403007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.408313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.408336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.408346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.414893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.414927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.421232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.421256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.421265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.429295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.429324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.429336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.436222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.436250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.436259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.441570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.441604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.446381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.446404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.446413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.451212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.451245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.451255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.456135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.456160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.456174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.461160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 
10:46:25.461184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.461194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.465899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.465923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.465932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.470620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.470643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.470652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.475466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.475489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.475499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.479987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.480011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.480020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.484743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.484766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.484776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.489378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.626 [2024-05-15 10:46:25.489401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.626 [2024-05-15 10:46:25.489410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.626 [2024-05-15 10:46:25.493887] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.627 [2024-05-15 10:46:25.493911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.627 [2024-05-15 10:46:25.493919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.498271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.498297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.498307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.503154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.503177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.503187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.509081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.509114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.514320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.514343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.514353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.519203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.519227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.519236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.523944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.523967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.523977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.528003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.528026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.528035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.886 [2024-05-15 10:46:25.530522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.886 [2024-05-15 10:46:25.530543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.886 [2024-05-15 10:46:25.530552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.536151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.536175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.536188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.542306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.542329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.542338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.546779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.546811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.550788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.550811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.550820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.554611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.554634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.554645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.558589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.558611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.558620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.562512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.562540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.562549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.566412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.566435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.566444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.570251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.570274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.570284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.574170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.574196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.574205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.578594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.578617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.578626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.583087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.583110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.583121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.588312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.588335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.588344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.593942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.593964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.593974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.599449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.599490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.604197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.604222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.604232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.608805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.608829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.608838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.612751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.612775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.616674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.616707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.616719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.620664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.620701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.624636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.624661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.628935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.628961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.628971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.632094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.632120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.632129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.637683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.637707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.637716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.644462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.887 [2024-05-15 10:46:25.644486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.887 [2024-05-15 10:46:25.644495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.887 [2024-05-15 10:46:25.651255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.651277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.651286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.658028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.658063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.658072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.664835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.664874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.671571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.671594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.671604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.678240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.678263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.678273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.684979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.685003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.685012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.691751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.691774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.691783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.698538] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.698564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.698573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.705327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.705350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.705359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.712124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.712147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.712157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.718914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.718938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.718947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.725740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.725764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.725773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.732543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.732566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.732575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.738492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.738516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.738525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.742825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.742848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.742859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.746898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.746922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.746931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.751038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.751066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.751074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.888 [2024-05-15 10:46:25.755066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:09.888 [2024-05-15 10:46:25.755089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.888 [2024-05-15 10:46:25.755098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.759001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.759030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.759039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.763406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.763440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.768789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.768814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.768824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.773967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.773993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.774003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.780423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.780449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.780460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.148 [2024-05-15 10:46:25.786514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.148 [2024-05-15 10:46:25.786542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.148 [2024-05-15 10:46:25.786555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.791678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.791709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.791721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.796804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.796832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.796844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.801505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.801534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.801546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.806711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.806737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.806749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.812064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.812090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.812102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.817866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.817894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.817906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.823681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.823707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.823719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.829166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.829191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.829203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.834896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.834922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.834941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.840143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.840167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.840178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.845065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.845089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.845099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.849259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.849287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.849297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.853944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.853966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.853975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.858352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.858375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.858384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.862291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.862314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.862324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.865082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.865104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.865113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.869174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.869207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.869218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.873914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.873940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.873951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.879143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.879168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.884568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.884593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.884603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.890078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.890105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.890116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.895578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.895602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.895612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.901761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.901787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.901798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.906454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.906477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.906487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.910742] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.910765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.910775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.914297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.914321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.914330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.918164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.149 [2024-05-15 10:46:25.918188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.149 [2024-05-15 10:46:25.918197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.149 [2024-05-15 10:46:25.922057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.922081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.922090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.925989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.926013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.926027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.929885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.929908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.929917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.933827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.933849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.933858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.937692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.937715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.937724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.941573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.941596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.941605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.945382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.945404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.945414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.949332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.949355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.949365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.952059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.952079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.952088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.955015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.955037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.955051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.958818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.958841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.958851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.962598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.962621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.962630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.966574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.966608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.969991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.970024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.973887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.973911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.973920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.977931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.977954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.977963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.982142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.982165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.982175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.986298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.986321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.988632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.988652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.988665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.992211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.992233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.992242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.995889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.995912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.995922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:25.999925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:25.999950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:25.999960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:26.004724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:26.004747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:26.004756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:26.010836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:26.010859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:26.010869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.150 [2024-05-15 10:46:26.017608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.150 [2024-05-15 10:46:26.017658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.150 [2024-05-15 10:46:26.017668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.024410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.024444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.031236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.031260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.031270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.038159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.038184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.038194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.044947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.044971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.044981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.051708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.051731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.051740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.413 [2024-05-15 10:46:26.058367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.413 [2024-05-15 10:46:26.058390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.413 [2024-05-15 10:46:26.058399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.065141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.065173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.071900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.071924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.071933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.078682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.078709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.078719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.085442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.085465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.085475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.092235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.092258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.092272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.099041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.099071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.099080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.105702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.105726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.105735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.111815] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.111838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.111847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.116255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.116278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.121316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.121349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.121362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.126414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.126440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.126451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.131440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.131466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.131476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.136519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.136543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.136553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.141607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.141631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.141641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.146671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.146694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.146704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.151596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.151618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.151627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.155595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.155617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.155627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.159616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.159638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.159648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.163954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.163977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.163986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.169378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.169402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.169411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.176161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.176185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.176194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.181502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.181526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.181540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.187105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.187129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.187138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.191163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.191184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.191193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.195174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.195197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.195206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.198874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.198899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.198909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.203440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.203462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.414 [2024-05-15 10:46:26.203472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.414 [2024-05-15 10:46:26.210162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.414 [2024-05-15 10:46:26.210186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.210196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.215320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.215343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.215352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.219340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.219363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.219372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.222112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.222134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.222143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.227242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.227263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.227273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.232931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.232954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.232963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.239746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.239768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.239778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.246050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.246072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.246081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.253868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.253891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.253901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.260841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.260863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.260872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.265461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.265483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.265492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.269641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.269663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.269676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.273702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.273724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.273733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.277813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.277835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.277844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.281786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.281808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.281817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.415 [2024-05-15 10:46:26.285822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.415 [2024-05-15 10:46:26.285844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.415 [2024-05-15 10:46:26.285853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.290986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.291009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.291019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.297486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.297509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.297518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.303314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.303342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.303353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.308264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.308289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.313141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.313169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.313182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.318155] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.318178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.318187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.323019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.323041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.323055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.328159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.328182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.328191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.333141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.333163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.333173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.337042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.337068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.337077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.341091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.341113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.341122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.345836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.345863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.345873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.350158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.676 [2024-05-15 10:46:26.350181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.676 [2024-05-15 10:46:26.350197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.676 [2024-05-15 10:46:26.354122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.354144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.354153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.358025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.358051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.358061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.362040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.362065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.362074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.366661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.366684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.366693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.370644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.370672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.370682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.374795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.374819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.374828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.379757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.379782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.379792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.386570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.386595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.386604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.392901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.392929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.392940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.400473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.400500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.400510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.407446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.407470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.407479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.413738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.413761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.419110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.419132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.419142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.424167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.424189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.424198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.429128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.429150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.429160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.433840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.433863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.438508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.438531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.438544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.443911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.443933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.443943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.448658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.448680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.448689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.453635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.453657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.453666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.459797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.459819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.459828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.464909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.464932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.464941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.468910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.468932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.468949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.473742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.473764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.473774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.478228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.478250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.478259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.483009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.483035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.483049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.489916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.489939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.677 [2024-05-15 10:46:26.489948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.677 [2024-05-15 10:46:26.496463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.677 [2024-05-15 10:46:26.496485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.496495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.504116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.504141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.504150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.511246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.511269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.511278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.515835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.515857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.515866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.519900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.519922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.519931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.523953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.523976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.523986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.527969] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.527991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.528000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.531638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.531660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.531670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.535182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.535205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.535214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.539152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.539183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.544100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.544122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.544131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.678 [2024-05-15 10:46:26.547693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.678 [2024-05-15 10:46:26.547714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.678 [2024-05-15 10:46:26.547723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.550059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.550080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.550089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.554695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.554716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.554725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.560760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.560783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.560792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.566735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.566760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.566769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.572257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.572280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.572290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.579205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.579228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.579237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.584337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.584358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.584368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.588756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.588777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.588786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.593640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.593661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.593670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.598326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.598351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.598361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.602762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.602793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.608454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.608476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.608485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.615051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.941 [2024-05-15 10:46:26.615073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.941 [2024-05-15 10:46:26.615083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.941 [2024-05-15 10:46:26.620628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.620650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.620659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.625582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.625614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.625627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.630309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.630336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.630347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.635228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.635253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.635263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.639052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.639077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.639087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.641252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.641274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.641283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.645073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.645096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.645106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.648938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.648966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.648975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.652995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.653018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.653027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.656541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.656564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.656573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.660203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.660227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.660236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.664102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.664124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.664133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.668001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.668024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.668034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.671878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.671910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.671919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.675825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.675849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.675858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.680057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.680090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.684625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.684647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.684656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.688249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.688272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.688281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.692995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.693018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.693027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.695614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.695636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.695645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.698496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.698523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.702387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.702411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.702421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.706296] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.706319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.706329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.710187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.710220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.714084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.714107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.714121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.717749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.717771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.717781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.721253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.721276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.721285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.942 [2024-05-15 10:46:26.723888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.942 [2024-05-15 10:46:26.723910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.942 [2024-05-15 10:46:26.723920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.728361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.728384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.728393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.732167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.732190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.732199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.736304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.736327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.736337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.740632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.740655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.740665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.744539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.744562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.744571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.748438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.748461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.748471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.752381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.752405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.752414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.756296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.756319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.756328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.760142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.760164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.760173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.764017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.764039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.764057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.767896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.767919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.767928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.771786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.771809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.771818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.775715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.775737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.775746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.779646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.779668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.779681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.783587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.783607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.783616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.787629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.787656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.787666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.792199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.792223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.792232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.796684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.796708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.796717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.800585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.800613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.800624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.804247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.804270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.804280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.808065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.808089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.808099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.943 [2024-05-15 10:46:26.812002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:10.943 [2024-05-15 10:46:26.812025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.943 [2024-05-15 10:46:26.812035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.205 [2024-05-15 10:46:26.815964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.205 [2024-05-15 10:46:26.815988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.205 [2024-05-15 10:46:26.815997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.205 [2024-05-15 10:46:26.820880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.205 [2024-05-15 10:46:26.820903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.205 [2024-05-15 10:46:26.820912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.205 [2024-05-15 10:46:26.827449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.205 [2024-05-15 10:46:26.827474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.205 [2024-05-15 10:46:26.827484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.205 [2024-05-15 10:46:26.833631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.205 [2024-05-15 10:46:26.833656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.205 [2024-05-15 10:46:26.833665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.839390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.839414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.839423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.843897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.843919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.843928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.847788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.847812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.847822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.851666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.851690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.851699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.855648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.855672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.855685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.860579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.860603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.860612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.865394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.865419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.865428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.868228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.868251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.868260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.873031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.873060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.873069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.878153] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.878186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.878197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.882931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.882958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.882968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.887917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.887943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.892985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.893010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.893020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.898145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.898169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.898179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.902908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.902933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.902942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.907713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.907737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.907747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.912617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.912641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.912651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.916926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.916950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.916959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.921750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.921774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.921784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.925910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.925942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.929956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.929980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.929989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.934079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.934103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.934122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.938781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.938809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.938821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.943265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.943288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.943299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.947360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.947383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.951449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.951472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.951482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.955554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.206 [2024-05-15 10:46:26.955578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.206 [2024-05-15 10:46:26.955587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.206 [2024-05-15 10:46:26.959609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.959632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.959644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.963728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.963751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.963760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.967836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.967861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.967871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.972819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.972851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.972861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.979153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.979178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.979188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.985349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.985373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.985383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.990453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.990477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.990487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:26.995581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:26.995605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:26.995615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.000738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.000765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.000776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.005613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.005639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.005649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.011513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.011537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.011547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.015322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.015347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.015362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.021204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.021229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.021238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.026339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.026365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.026376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.031989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.032013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.032023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.037864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.037889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.037898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.043725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.043749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.043759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.049854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.049879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.049897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.055500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.055533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.060825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.060849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.060858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.065791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.065818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.065827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.070611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.070635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.070644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.207 [2024-05-15 10:46:27.076475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:11.207 [2024-05-15 10:46:27.076499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.207 [2024-05-15 10:46:27.076509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.468 [2024-05-15 10:46:27.081797] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00)
00:27:11.468 [2024-05-15 10:46:27.081824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.469 [2024-05-15 10:46:27.081833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.469 [2024-05-15 10:46:27.088325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00)
00:27:11.469 [2024-05-15 10:46:27.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.469 [2024-05-15 10:46:27.088361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:11.469 [2024-05-15 10:46:27.093208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00)
00:27:11.469 [2024-05-15 10:46:27.093233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.469 [2024-05-15 10:46:27.093242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:11.469 [2024-05-15 10:46:27.097585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00)
00:27:11.469 [2024-05-15 10:46:27.097608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.469 [2024-05-15 10:46:27.097617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:11.469 [2024-05-15 10:46:27.101319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00)
00:27:11.469 [2024-05-15 10:46:27.101345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.469 [2024-05-15 10:46:27.101355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.469
00:27:11.469 Latency(us)
00:27:11.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:11.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:11.469 nvme0n1 : 2.00 6324.16 790.52 0.00 0.00 2526.64 420.38 8278.23
00:27:11.469 ===================================================================================================================
00:27:11.469 Total : 6324.16 790.52 0.00 0.00 2526.64 420.38 8278.23
00:27:11.469 0
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:11.469 | .driver_specific
00:27:11.469 | .nvme_error
00:27:11.469 | .status_code
00:27:11.469 | .command_transient_transport_error' 00:27:11.469
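Aside for readers following the trace: get_transient_errcount above is what turns the flood of COMMAND TRANSIENT TRANSPORT ERROR completions into a single pass/fail number; the rpc.py expansion of that call and the actual count check follow in the log just below. A minimal sketch of the check, using only the rpc.py call and jq filter visible in the trace — socket path and bdev name are copied from the log, the helper name is illustrative, and the real logic lives in the digest.sh script the trace references:

  # Sketch, not the real digest.sh helper: read bdevperf's iostat over its RPC socket
  # and pull out the transient transport error counter that the injected CRC errors bump.
  get_transient_errcount_sketch() {
      /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$1" \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  (( $(get_transient_errcount_sketch nvme0n1) > 0 ))   # the randread pass above counted 408 of them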
10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 408 > 0 ))
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2848747
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2848747 ']'
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2848747
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2848747
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2848747'
00:27:11.469 killing process with pid 2848747
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2848747
00:27:11.469 Received shutdown signal, test time was about 2.000000 seconds
00:27:11.469
00:27:11.469 Latency(us)
00:27:11.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:11.469 ===================================================================================================================
00:27:11.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:11.469 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2848747
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2849634
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2849634 /var/tmp/bperf.sock
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2849634 ']'
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:12.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
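The teardown just traced is the stock autotest killprocess pattern, run before run_bperf_err relaunches bdevperf for the write-side pass (randwrite, 4096-byte I/O, queue depth 128). A rough sketch of that pattern, reconstructed only from the trace lines above; the real helper in autotest_common.sh also special-cases sudo-wrapped processes, which is skipped here, and the function name below is made up for illustration:

  # Sketch of the kill-and-wait sequence seen in the trace; pid 2848747 was the first bdevperf.
  stop_bperf_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                 # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")    # the trace shows reactor_1 for bdevperf
      [ "$name" = "sudo" ] && return 1           # the real helper handles sudo differently
      echo "killing process with pid $pid"
      kill "$pid"                                # SIGTERM; bdevperf prints its (empty) summary
      wait "$pid"                                # reap it, as the trace does right after the kill
  }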
00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.034 10:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:12.034 [2024-05-15 10:46:27.705013] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:12.034 [2024-05-15 10:46:27.705102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849634 ] 00:27:12.035 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.035 [2024-05-15 10:46:27.788583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.035 [2024-05-15 10:46:27.877971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.601 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:12.601 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:12.601 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.601 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.860 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.121 nvme0n1 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:13.121 10:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.121 Running I/O for 2 seconds... 
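Before the 2-second randwrite pass starts, the whole digest-error scenario is configured over RPC, as traced above. In outline, with BPERF_RPC addressing the bdevperf socket and TARGET_RPC assumed to address the nvmf target application's default RPC socket (the trace reaches the latter through rpc_cmd):

SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
TARGET_RPC="$SPDK_DIR/scripts/rpc.py"

# Keep per-bdev NVMe error statistics and retry failed I/O indefinitely, so the
# injected digest failures accumulate as (00/22) counters instead of failing the job.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale CRC32C injection, attach the subsystem with data digest
# enabled (--ddgst), then corrupt the next 256 CRC32C operations on the target.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the workload defined on the bdevperf command line; each corrupted digest
# shows up below as a data_crc32_calc_done error followed by a retried WRITE.
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests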
00:27:13.121 [2024-05-15 10:46:28.847538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:27:13.121 [2024-05-15 10:46:28.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.121 [2024-05-15 10:46:28.848363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.856347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:13.122 [2024-05-15 10:46:28.857070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.857102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.865974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:27:13.122 [2024-05-15 10:46:28.866815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.875611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:13.122 [2024-05-15 10:46:28.876568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.876594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.885223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e84c0 00:27:13.122 [2024-05-15 10:46:28.886316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.886343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.894839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:13.122 [2024-05-15 10:46:28.896041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.896074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.904436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:27:13.122 [2024-05-15 10:46:28.905765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.905794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.914022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:27:13.122 [2024-05-15 10:46:28.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.915503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.920519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:27:13.122 [2024-05-15 10:46:28.921108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.921133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.929791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:13.122 [2024-05-15 10:46:28.930376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.930399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.938469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:13.122 [2024-05-15 10:46:28.939043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.939079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.948026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:13.122 [2024-05-15 10:46:28.948735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.948763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.957605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:13.122 [2024-05-15 10:46:28.958428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.967169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:13.122 [2024-05-15 10:46:28.968121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.968146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.976726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:27:13.122 [2024-05-15 10:46:28.977794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.977819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.122 [2024-05-15 10:46:28.986292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:13.122 [2024-05-15 10:46:28.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.122 [2024-05-15 10:46:28.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:28.995877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:27:13.384 [2024-05-15 10:46:28.997210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:28.997235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.005446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:27:13.384 [2024-05-15 10:46:29.006887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.006913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.011897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:13.384 [2024-05-15 10:46:29.012479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.012502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.021451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:27:13.384 [2024-05-15 10:46:29.022162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.022186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.031011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:27:13.384 [2024-05-15 10:46:29.031840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:13.384 [2024-05-15 10:46:29.031873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.040578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:27:13.384 [2024-05-15 10:46:29.041530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.041554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.049800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:27:13.384 [2024-05-15 10:46:29.050750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.050775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.058312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:13.384 [2024-05-15 10:46:29.059248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.059273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.067857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8d30 00:27:13.384 [2024-05-15 10:46:29.068922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.068947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.077416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:27:13.384 [2024-05-15 10:46:29.078606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.078629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.086968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:13.384 [2024-05-15 10:46:29.088287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.088311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.096538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:27:13.384 [2024-05-15 10:46:29.097969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 
nsid:1 lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.097996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.102989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:13.384 [2024-05-15 10:46:29.103563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.384 [2024-05-15 10:46:29.103592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:13.384 [2024-05-15 10:46:29.112527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:13.384 [2024-05-15 10:46:29.113224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.113250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.121745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:27:13.385 [2024-05-15 10:46:29.122439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.122465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.130251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:13.385 [2024-05-15 10:46:29.130931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.130955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.139794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:13.385 [2024-05-15 10:46:29.140604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.140630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.149422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:27:13.385 [2024-05-15 10:46:29.150436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.150462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.160761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:27:13.385 [2024-05-15 10:46:29.161959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.161986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.170643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:27:13.385 [2024-05-15 10:46:29.171818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.171843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.180173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:27:13.385 [2024-05-15 10:46:29.181472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.181499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.189722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:27:13.385 [2024-05-15 10:46:29.191154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.191178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.196194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:27:13.385 [2024-05-15 10:46:29.196749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.196772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.205718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:27:13.385 [2024-05-15 10:46:29.206402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.206427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.214933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:27:13.385 [2024-05-15 10:46:29.215612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.215635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.223410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 
00:27:13.385 [2024-05-15 10:46:29.224076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.224099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.232928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:13.385 [2024-05-15 10:46:29.233726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.233750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.242454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:27:13.385 [2024-05-15 10:46:29.243378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.243401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.385 [2024-05-15 10:46:29.251985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:13.385 [2024-05-15 10:46:29.253027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.385 [2024-05-15 10:46:29.253055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.261516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:27:13.645 [2024-05-15 10:46:29.262683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.262707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.271581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:27:13.645 [2024-05-15 10:46:29.273081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.273110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.282955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:27:13.645 [2024-05-15 10:46:29.284463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.284487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.292568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.645 [2024-05-15 10:46:29.294117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.294140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.299032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:13.645 [2024-05-15 10:46:29.299720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.299743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.308273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:27:13.645 [2024-05-15 10:46:29.308940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.308964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.317325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:27:13.645 [2024-05-15 10:46:29.317986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.318009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.326405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:27:13.645 [2024-05-15 10:46:29.327069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.327092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.334883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:27:13.645 [2024-05-15 10:46:29.335541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.335568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.344402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:27:13.645 [2024-05-15 10:46:29.345186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.345210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.353923] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:13.645 [2024-05-15 10:46:29.354836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.354863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.363481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:27:13.645 [2024-05-15 10:46:29.364521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.364550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.373059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:27:13.645 [2024-05-15 10:46:29.374213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.374240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.382599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:27:13.645 [2024-05-15 10:46:29.383875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.383900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.392137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:13.645 [2024-05-15 10:46:29.393543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.393566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.401685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:27:13.645 [2024-05-15 10:46:29.403231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.403256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.408138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:27:13.645 [2024-05-15 10:46:29.408794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e 
p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.417669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:13.645 [2024-05-15 10:46:29.418457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.418481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.426869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:27:13.645 [2024-05-15 10:46:29.427656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.427680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.436085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:27:13.645 [2024-05-15 10:46:29.436859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.436882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.444431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:27:13.645 [2024-05-15 10:46:29.445212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.445234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.453991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:13.645 [2024-05-15 10:46:29.454885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.454908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.463529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:13.645 [2024-05-15 10:46:29.464544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.464567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.473078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:27:13.645 [2024-05-15 10:46:29.474216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.474239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.482630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:27:13.645 [2024-05-15 10:46:29.483890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.645 [2024-05-15 10:46:29.483913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:13.645 [2024-05-15 10:46:29.492154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:13.646 [2024-05-15 10:46:29.493539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.646 [2024-05-15 10:46:29.493562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:13.646 [2024-05-15 10:46:29.501702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:27:13.646 [2024-05-15 10:46:29.503214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.646 [2024-05-15 10:46:29.503240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.646 [2024-05-15 10:46:29.508170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:27:13.646 [2024-05-15 10:46:29.508809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.646 [2024-05-15 10:46:29.508831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:13.646 [2024-05-15 10:46:29.517377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:13.904 [2024-05-15 10:46:29.518012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.904 [2024-05-15 10:46:29.518036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:13.904 [2024-05-15 10:46:29.526565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:13.904 [2024-05-15 10:46:29.527208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.904 [2024-05-15 10:46:29.527231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:13.904 [2024-05-15 10:46:29.536267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.904 [2024-05-15 10:46:29.536644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.904 [2024-05-15 10:46:29.536665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.904 [2024-05-15 10:46:29.545641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.545810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.545833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.555056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.555227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.555247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.564464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.564633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.564654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.573855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.574026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.574055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.583284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.583452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.583473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.592688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.592855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.592876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.602090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:13.905 [2024-05-15 10:46:29.602283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.611490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.611657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.611680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.620884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.621065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.621101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.630283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.630450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.630474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.639676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.639846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.639868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.649069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.649238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.649260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.658468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.658638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.658659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.667926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.668098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.668120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.677340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.677508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.677528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.686741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.686910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.686931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.696137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.696303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.696324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.705504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.705672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.705695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.714913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.715086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.715116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.724292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.724461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.733695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.733864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.733888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.743088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.743279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.752469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.752638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.752659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.761832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.762000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.762021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.905 [2024-05-15 10:46:29.771252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:13.905 [2024-05-15 10:46:29.771422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.905 [2024-05-15 10:46:29.771443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.780635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.780804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.780825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.790058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.790226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.790246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.799454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 
00:27:14.165 [2024-05-15 10:46:29.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.808844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.809011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.809034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.818252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.818428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.827642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.827809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.827830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.837034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.837206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.837226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.846427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.846595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.846616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.855857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.856028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.856051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.865251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.865421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.865444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.874649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.874821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.874846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.884060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.884241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.884264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.893456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.893624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.893645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.902842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.903010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.903034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.912241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.912409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.912430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.921645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.921812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.921834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 
10:46:29.931033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.931211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.931232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.940524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.940696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.940717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.949927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.950098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.950120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.959317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.959488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.959509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.968714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.968881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.968903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.978097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.978271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.978291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.987533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.987704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.987725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:29.996924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:29.997096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:29.997118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:30.008443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:30.008670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.165 [2024-05-15 10:46:30.008712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.165 [2024-05-15 10:46:30.019985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.165 [2024-05-15 10:46:30.020163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.166 [2024-05-15 10:46:30.020191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.166 [2024-05-15 10:46:30.031101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.166 [2024-05-15 10:46:30.031318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.166 [2024-05-15 10:46:30.031353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.041489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.041660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.041682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.050909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.051084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.051107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.060315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.060483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.060505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.072529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.072750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.072785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.083188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.083358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.092602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.092772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.092794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.102028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.102201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.102225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.111452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.111622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.111646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.120877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.121055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.121078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.130299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.130469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.130496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.139708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.139903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.149139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.149307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.149334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.158524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.158693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.158715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.167952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.168127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.168149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.177738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.177953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.177977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.189425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.189594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.189614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.198837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.199006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.199027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.208285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.208453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.425 [2024-05-15 10:46:30.208477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.425 [2024-05-15 10:46:30.217691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.425 [2024-05-15 10:46:30.217859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.217880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.227117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.227286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.227307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.236568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.236743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.236772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.245989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.246162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.246183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.256239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.256418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.256439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.266557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.266737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.266759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.277077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.277257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.277279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.286938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.287114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.287135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.426 [2024-05-15 10:46:30.296365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.426 [2024-05-15 10:46:30.296535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.426 [2024-05-15 10:46:30.296556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.305773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.305945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.305969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.315190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.315361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.315382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.324609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.324777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.324799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.334015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.334190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.334211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.343427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.343596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.343618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.352838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.353005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.353026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.362946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.363135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.363157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.373307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.373494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.373528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.383637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.383822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.383851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.393949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.394139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.394162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.403577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.403753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.403777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.412983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.413165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.413187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.422406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.422576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.422598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.431834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.432006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.432027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.441261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.441431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.441451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.450688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.450858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.450880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.460102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.460271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.460292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.469519] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.469688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.469709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.478932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.479106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.479127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.488336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.488504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.488526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.497752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.497920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.497941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.507168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.507343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.507366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.516574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.516742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.686 [2024-05-15 10:46:30.516764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.686 [2024-05-15 10:46:30.525986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.686 [2024-05-15 10:46:30.526161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.687 [2024-05-15 10:46:30.526183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.687 
[2024-05-15 10:46:30.535384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.687 [2024-05-15 10:46:30.535553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.687 [2024-05-15 10:46:30.535575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.687 [2024-05-15 10:46:30.544786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.687 [2024-05-15 10:46:30.544953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.687 [2024-05-15 10:46:30.544975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.687 [2024-05-15 10:46:30.554193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.687 [2024-05-15 10:46:30.554363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.687 [2024-05-15 10:46:30.554385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.948 [2024-05-15 10:46:30.563622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.948 [2024-05-15 10:46:30.563790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.948 [2024-05-15 10:46:30.563815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.948 [2024-05-15 10:46:30.573031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.948 [2024-05-15 10:46:30.573204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.948 [2024-05-15 10:46:30.573226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.948 [2024-05-15 10:46:30.582465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.948 [2024-05-15 10:46:30.582634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.948 [2024-05-15 10:46:30.582656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.948 [2024-05-15 10:46:30.591849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.948 [2024-05-15 10:46:30.592016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.948 [2024-05-15 10:46:30.592037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.948 [2024-05-15 10:46:30.601288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.948 [2024-05-15 10:46:30.601457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.948 [2024-05-15 10:46:30.601480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.610688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.610856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.610878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.620250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.620435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.620458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.630427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.630597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.630619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.639831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.640001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.640027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.649198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.649373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.649396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.658625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.658816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.668011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.668210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.677408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.677576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.677597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.686802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.686969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.686990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.696192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.696361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.696382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.705574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.705742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.705764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.714950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.715122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.715144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.724334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.724501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:14.949 [2024-05-15 10:46:30.724527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.733703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.733870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.733893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.743090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.743259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.743280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.752484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.752652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.752673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.761868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.762037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.762063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.771247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.771416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.771437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.780629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.790006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.790178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.790199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.799404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.799570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.799591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.808787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.808961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.808983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.949 [2024-05-15 10:46:30.818186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:14.949 [2024-05-15 10:46:30.818354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.949 [2024-05-15 10:46:30.818376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.209 [2024-05-15 10:46:30.827577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:15.209 [2024-05-15 10:46:30.827745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.209 [2024-05-15 10:46:30.827767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.209 [2024-05-15 10:46:30.836961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:27:15.209 [2024-05-15 10:46:30.837132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.209 [2024-05-15 10:46:30.837154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.209 00:27:15.209 Latency(us) 00:27:15.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.209 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.209 nvme0n1 : 2.00 27023.74 105.56 0.00 0.00 4728.23 2311.01 13176.19 00:27:15.209 =================================================================================================================== 00:27:15.209 Total : 27023.74 105.56 0.00 0.00 4728.23 2311.01 13176.19 00:27:15.209 0 00:27:15.209 10:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.209 10:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:27:15.209 10:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.209 10:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.209 | .driver_specific 00:27:15.209 | .nvme_error 00:27:15.209 | .status_code 00:27:15.209 | .command_transient_transport_error' 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2849634 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2849634 ']' 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2849634 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2849634 00:27:15.209 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:15.210 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:15.210 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2849634' 00:27:15.210 killing process with pid 2849634 00:27:15.210 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2849634 00:27:15.210 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.210 00:27:15.210 Latency(us) 00:27:15.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.210 =================================================================================================================== 00:27:15.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.210 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2849634 00:27:15.781 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:15.781 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:15.781 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:15.781 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2850249 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2850249 /var/tmp/bperf.sock 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2850249 ']' 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.782 10:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:15.782 [2024-05-15 10:46:31.502502] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:15.782 [2024-05-15 10:46:31.502651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850249 ] 00:27:15.782 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:15.782 Zero copy mechanism will not be used. 00:27:15.782 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.782 [2024-05-15 10:46:31.629148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.042 [2024-05-15 10:46:31.721740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.610 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.181 nvme0n1 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- 
# bperf_py perform_tests 00:27:17.181 10:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:17.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:17.181 Zero copy mechanism will not be used. 00:27:17.181 Running I/O for 2 seconds... 00:27:17.181 [2024-05-15 10:46:32.850419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.850697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.850743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.855472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.855717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.855752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.861093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.861324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.861352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.867678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.867912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.867938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.873377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.873605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.873629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.878732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.878965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.878989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.883811] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.883902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.883924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.891223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.891458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.891485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.898209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.898431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.898455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.903923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.904154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.904180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.908659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.908879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.908901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.913373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.913608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.913631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.918460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.918694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.918717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.923296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.923440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.923462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.927875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.928105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.928126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.932829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.933054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.933076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.939192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.939411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.939433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.944321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.944561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.944583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.949035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.949262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.949284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.181 [2024-05-15 10:46:32.953703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.181 [2024-05-15 10:46:32.953923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.181 [2024-05-15 10:46:32.953945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.958413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.958640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.958672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.963238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.963460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.963486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.968030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.968280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.972642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.972755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.972778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.977391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.977610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.977634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.982123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.982353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.982376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.986965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.987187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.987209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.991766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.991986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.992008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:32.996532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:32.996752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:32.996777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.001201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.001426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.005947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.006176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.006200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.010534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.010755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.010781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.015192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.015412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.015433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.019766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.019831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.024569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.024790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.024811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.029609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.029831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.034870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.035107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.035131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.039528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.039753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.039776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.046015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.046250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.046273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.182 [2024-05-15 10:46:33.050950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.182 [2024-05-15 10:46:33.051184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.182 [2024-05-15 10:46:33.051206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.055747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.055975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.055998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.060606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.060826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.060847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.066829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.067061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.067083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.072054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.072280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.072302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.077506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.077723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.077745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.082250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.082493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.088383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.088605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.088626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.093741] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.093961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.093982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.099374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.099607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.099633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.105824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.106076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.106104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.443 [2024-05-15 10:46:33.112705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.443 [2024-05-15 10:46:33.112923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.443 [2024-05-15 10:46:33.112952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.118284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.118386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.118412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.123555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.123781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.123806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.128553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.128772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.128796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.133421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.133638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.133661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.139945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.140167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.140190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.146294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.146522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.146547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.153622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.153879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.153906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.161204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.161450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.161476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.168999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.169217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.177031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.177264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.177287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.184891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.185088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.185110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.193464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.193692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.193715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.201640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.201869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.210531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.210758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.210783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.218773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.219000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.219031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.228180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.228407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.228431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.236087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.236324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.236347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.242400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.242630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.242653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.248816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.249049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.249073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.255671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.255900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.255923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.261480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.261709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.261735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.266820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.267050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.267076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.271981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.272232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.272258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.277314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.277562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.277586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.282288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.282531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.282555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.287948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.288130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.288153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.294262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.294492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.444 [2024-05-15 10:46:33.294515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.444 [2024-05-15 10:46:33.299592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.444 [2024-05-15 10:46:33.299812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.445 [2024-05-15 10:46:33.299837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.445 [2024-05-15 10:46:33.306110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.445 [2024-05-15 10:46:33.306338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.445 [2024-05-15 10:46:33.306363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.445 [2024-05-15 10:46:33.312937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.445 [2024-05-15 10:46:33.313161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.445 [2024-05-15 10:46:33.313184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.321017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.321240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.321263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.328111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.328329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.328358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.334885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.335124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.335148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.342278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.342541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.342578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.349379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.349597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.349619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.356337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.356597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.363491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.363709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.363737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.370391] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.370612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.370637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.377279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.377495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.377522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.384376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.384603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.384627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.390972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.391210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.391234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.396157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.396376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.396398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.400543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.400758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.400779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.405158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.405385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.405409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.409605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.409835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.409858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.414221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.414439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.414462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.418888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.419123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.419146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.424232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.424448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.424470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.428946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.429166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.429193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.433440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.433657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.433680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.437918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.438143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.438165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.442199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.442259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.442281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.446580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.446796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.446821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.450580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.450775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.450798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.455157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.455337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.455360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.708 [2024-05-15 10:46:33.458855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.708 [2024-05-15 10:46:33.459039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.708 [2024-05-15 10:46:33.459066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.463007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.463203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.463227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.467752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.467952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.467978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.472662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.472843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.472869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.477294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.477518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.477541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.481503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.481692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.481713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.484655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.484833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.484854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.487719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.487892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.487914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.491097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.491279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.494405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.494583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.494604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.497506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.497681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.497703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.501238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.501415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.501436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.505964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.506261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.506290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.511072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.511270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.511294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.516639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.516854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.516876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.522814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.522979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.523001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.527302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.527472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.527495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.530423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.530592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.530614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.533364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.533535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.533557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.536313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.536483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.536502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.539208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.539376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.539399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.542135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.542323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.545302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.545469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.545492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.549780] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.549949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.549971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.553714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.553882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.553904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.557718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.557886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.557907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.561258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.561426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.709 [2024-05-15 10:46:33.561447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.709 [2024-05-15 10:46:33.564802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.709 [2024-05-15 10:46:33.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.710 [2024-05-15 10:46:33.564992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.710 [2024-05-15 10:46:33.568211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.710 [2024-05-15 10:46:33.568381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.710 [2024-05-15 10:46:33.568402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.710 [2024-05-15 10:46:33.571695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.710 [2024-05-15 10:46:33.571861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.710 [2024-05-15 10:46:33.571883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.710 [2024-05-15 10:46:33.575020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.710 [2024-05-15 10:46:33.575193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.710 [2024-05-15 10:46:33.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.710 [2024-05-15 10:46:33.578302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.710 [2024-05-15 10:46:33.578468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.710 [2024-05-15 10:46:33.578489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.970 [2024-05-15 10:46:33.581431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.970 [2024-05-15 10:46:33.581598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.970 [2024-05-15 10:46:33.581618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.970 [2024-05-15 10:46:33.585363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.585526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.585548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.589546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.589715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.589736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.592812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.592999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.596176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.596332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.596358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.599501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.599659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.599681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.603301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.603481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.603507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.607806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.608014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.608055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.612946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.613155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.613181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.618074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.618300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.618324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.623697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.623896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.623928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.628834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.629098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.629123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.633935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.634182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.634206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.638975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.639255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.639278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.643743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.643947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.643969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.647023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.647187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.647208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.649900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.650062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.650084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.653169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.653325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.653346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.656111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.656267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.659117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.659277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.659298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.662735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.971 [2024-05-15 10:46:33.662899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.971 [2024-05-15 10:46:33.662920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.971 [2024-05-15 10:46:33.667501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.667656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.667683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.670733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.670896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.670917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.673953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.674118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.674139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.677222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.677381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.677407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.680439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.680593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.680614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.683543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.683698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.683720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.686815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.686972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.686994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.689966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.690132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.690154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.693230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.693390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.693411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.696524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.696679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.696700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.699802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.699960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.699980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.703059] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.703217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.703238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.706278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.706439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.706461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.709520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.709678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.709697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.712809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.712966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.712988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.716158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.716319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.716341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.719390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.719569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.722726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.722885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.722911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.726087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.726243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.726266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.972 [2024-05-15 10:46:33.729377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.972 [2024-05-15 10:46:33.729533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.972 [2024-05-15 10:46:33.729554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.732633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.732789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.732811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.735842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.735999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.736022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.739146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.739324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.742509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.742666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.742687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.745818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.745977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.745997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.749135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.749294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.749314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.752416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.752577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.752599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.755725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.755881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.755903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.758997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.759167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.759188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.762325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.762483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.762504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.765546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.765699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.765721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.769084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.769265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.772416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.772576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.772600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.775727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.775887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.775910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.778863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.779022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.779055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.782095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.782252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.782272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.785821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.785993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.786014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.789433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.789591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.789613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.792335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.792488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.792509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.795301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.973 [2024-05-15 10:46:33.795462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.973 [2024-05-15 10:46:33.795483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.973 [2024-05-15 10:46:33.798354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.798512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.798538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.802100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.802282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.802304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.806485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.806638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.806663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.809779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.809937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.809958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.813620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.813793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.813816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.818467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.818676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.818698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.823485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.823716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.823739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.828691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.828903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.828925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.834829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.835119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.835143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.839197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.839397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.839420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.974 [2024-05-15 10:46:33.842331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:17.974 [2024-05-15 10:46:33.842492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.974 [2024-05-15 10:46:33.842514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.845290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.845447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.845468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.848237] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.848394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.848416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.851199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.851354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.851384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.854112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.854275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.854298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.856978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.857141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.857165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.859885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.860098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.863009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.863172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.863201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.866293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.866453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.866477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.870681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.870840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.874483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.874645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.877724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.877879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.881022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.881186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.881209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.884243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.884403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.884426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.887533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.887692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.887715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.890908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.891072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.891094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.894268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.894424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.894445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.897632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.897788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.897809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.901021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.901181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.901203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.904284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.904442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.904464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.907602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.907761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.907782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.910956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.911120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.911141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.914226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.914381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.914402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.234 [2024-05-15 10:46:33.917482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.234 [2024-05-15 10:46:33.917636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.234 [2024-05-15 10:46:33.917657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.921405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.921614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.926350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.926603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.926625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.931159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.931355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.931377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.936877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.937082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.937104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.942363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.942549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.942570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.946417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.946616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.946637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.949370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.949525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.949546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.952286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.952441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.952461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.955234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.955390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.955411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.958519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.958674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.958694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.961473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.961628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.961650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.964664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.964827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.964849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.968084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.968291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.968311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.972463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.972702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.972723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.977357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.977555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.977577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.983095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.983314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.983336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.988404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.993486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.993726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.993747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:33.998546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:33.998798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:33.998819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.003600] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.003882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.003908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.008592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.008863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.013671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.013915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.013939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.018616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.018866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.018888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.023672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.023947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.023969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.028632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.028902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.028924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.033701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.033983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.034006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.039485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.039712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.235 [2024-05-15 10:46:34.039734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.235 [2024-05-15 10:46:34.046067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.235 [2024-05-15 10:46:34.046300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.046322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.050606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.050786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.050808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.054169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.054323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.054345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.057135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.057288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.057310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.060118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.060276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.060298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.063425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.063581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.063603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.066720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.066902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.066922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.071062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.071252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.071274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.075903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.076115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.076139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.080701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.080896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.080917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.085303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.085509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.085535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.090611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.090820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.090842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.097284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.097554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.097575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.236 [2024-05-15 10:46:34.102268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.236 [2024-05-15 10:46:34.102507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.236 [2024-05-15 10:46:34.102529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.107311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.107588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.107616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.112244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.112412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.112437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.117504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.117755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.117811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.122516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.122712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.122737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.127607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.127793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.127817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.132697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.132892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.132916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.137828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.138088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.138112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.142893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.143155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.143179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.147932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.148134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.148156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.152940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.153185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.153207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.158030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.495 [2024-05-15 10:46:34.158242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.495 [2024-05-15 10:46:34.158264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.495 [2024-05-15 10:46:34.162724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.162991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.163012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.167832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.168085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.168106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.172894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.173157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.173182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.177863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.178111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.178132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.182859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.183081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.183102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.187899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.188144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.188166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.192964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.193221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.193243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.198032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.198300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.198321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.203082] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.203288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.203309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.208093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.208334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.208361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.213110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.213309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.213332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.218252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.218512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.218534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.223251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.223466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.223487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.228423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.228673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.228695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.233487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.233737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.233759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.238394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.238589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.238610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.243513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.243769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.243791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.248531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.248731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.248752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.253509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.253713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.253734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.258563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.258771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.258797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.263714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.263963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.268747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.268963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.268984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.273844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.274130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.274151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.278834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.279090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.279110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.283905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.284165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.284187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.288875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.289114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.289135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.293877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.294079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.496 [2024-05-15 10:46:34.294101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.496 [2024-05-15 10:46:34.298966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.496 [2024-05-15 10:46:34.299170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.299192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.304106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.304353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.304376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.309190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.309434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.309455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.314156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.314403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.314426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.319216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.319468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.319490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.324306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.324547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.324569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.329368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.329630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.334337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.334579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.334601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.339424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.339677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.339699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.344425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.344622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.344645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.349454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.349695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.349717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.354471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.354657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.354678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.359598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.359854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.359876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.497 [2024-05-15 10:46:34.364564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.497 [2024-05-15 10:46:34.364819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.497 [2024-05-15 10:46:34.364851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.369746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.370020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.370053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.374794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.374996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.375021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.379904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.380177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.380201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.384940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.385151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.385173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.390107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.390337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.390359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.395099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.395301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.395323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.400197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.400404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.400426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.405282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.756 [2024-05-15 10:46:34.405468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.756 [2024-05-15 10:46:34.405491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.756 [2024-05-15 10:46:34.410405] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.410659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.410681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.415455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.415649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.415671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.420525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.420709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.420730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.425617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.425810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.425832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.430724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.430985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.431006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.435685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.435934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.435956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.440728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.440915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.440947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.445855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.446099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.446122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.450906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.451176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.451198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.455885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.456143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.456167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.460894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.461093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.461115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.465928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.466173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.466195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.471006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.471241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.471262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.476048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.476323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.476348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.481182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.481412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.481433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.486235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.486482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.486505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.491268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.491476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.491499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.496274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.496518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.496540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.501362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.501596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.501618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.506367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.506567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.506592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.511397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.511632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.511655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.516473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.516720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.516742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.521546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.521793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.521815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.526504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.526746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.526768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.531583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.531829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.531850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.536669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.536908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.536930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.541700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.541884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.541904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.757 [2024-05-15 10:46:34.546779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.757 [2024-05-15 10:46:34.546963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.757 [2024-05-15 10:46:34.546985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.551828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.552036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.552062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.556918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.557120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.557141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.561893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.562091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.567011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.567262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.567284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.572031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.572229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.572250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.577053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.577295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.577317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.582122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.582375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.582396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.587212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.587453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.587475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.592253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.592442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.592464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.597376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.597618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.597639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.602401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.602588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.602609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.607544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.607797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.607822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.612621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.612859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.612881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.617698] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.622908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.623170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.623198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.758 [2024-05-15 10:46:34.627936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:18.758 [2024-05-15 10:46:34.628128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.758 [2024-05-15 10:46:34.628151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.633053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.633305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.633329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.638149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.638388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.638411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.643203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.643466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.648300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.648547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.653269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.653504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.653530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.658335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.658581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.658605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.663345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.663544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.663566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.668494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.668743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.668765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.673680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.673949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.673971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.678661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.678918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.678940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.683693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.683896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.683917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.688926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.689191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.689213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.693901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.694148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.694171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.698984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.699223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.699245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.703906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.704186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.704209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.709005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.709257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.709280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.713959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.019 [2024-05-15 10:46:34.714210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.019 [2024-05-15 10:46:34.714232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.019 [2024-05-15 10:46:34.718995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.719197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.719219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.724127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.724373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.724395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.729196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.729452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.729474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.734182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.734423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.739245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.739485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.739508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.744193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.744440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.744462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.749279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.749518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.749540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.754226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.754495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.759314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.759554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.759576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.764263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.764505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.764535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.769301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.769493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.769514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.774303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.774542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.774564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.779349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.779620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.784426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.784703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.784725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.789414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 
00:27:19.020 [2024-05-15 10:46:34.789653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.789675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.794490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.794736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.794758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.799532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.799754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.804591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.804781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.804804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.809611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.809847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.809869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.814676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.814931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.814954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.819764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.820012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.820039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.824773] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.824971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.824993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.829854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.830080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.834963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.835155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.835177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.020 [2024-05-15 10:46:34.840004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.020 [2024-05-15 10:46:34.840213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.020 [2024-05-15 10:46:34.840234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.021 [2024-05-15 10:46:34.845163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:19.021 [2024-05-15 10:46:34.845316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.021 [2024-05-15 10:46:34.845338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.021 00:27:19.021 Latency(us) 00:27:19.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.021 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:19.021 nvme0n1 : 2.00 6518.38 814.80 0.00 0.00 2449.97 1362.46 9588.95 00:27:19.021 =================================================================================================================== 00:27:19.021 Total : 6518.38 814.80 0.00 0.00 2449.97 1362.46 9588.95 00:27:19.021 0 00:27:19.021 10:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:19.021 10:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:19.021 10:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:19.021 | .driver_specific 00:27:19.021 | .nvme_error 00:27:19.021 | .status_code 00:27:19.021 | .command_transient_transport_error' 00:27:19.021 10:46:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2850249 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2850249 ']' 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2850249 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2850249 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2850249' 00:27:19.282 killing process with pid 2850249 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2850249 00:27:19.282 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.282 00:27:19.282 Latency(us) 00:27:19.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.282 =================================================================================================================== 00:27:19.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.282 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2850249 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2847821 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2847821 ']' 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2847821 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2847821 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2847821' 00:27:19.853 killing process with pid 2847821 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2847821 00:27:19.853 [2024-05-15 10:46:35.468162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:19.853 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@971 -- # wait 2847821 00:27:20.111 00:27:20.111 real 0m17.070s 00:27:20.111 user 0m32.592s 00:27:20.111 sys 0m3.489s 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.111 ************************************ 00:27:20.111 END TEST nvmf_digest_error 00:27:20.111 ************************************ 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:20.111 10:46:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:20.111 rmmod nvme_tcp 00:27:20.381 rmmod nvme_fabrics 00:27:20.381 rmmod nvme_keyring 00:27:20.381 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:20.381 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:20.381 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:20.381 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2847821 ']' 00:27:20.381 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2847821 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 2847821 ']' 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 2847821 00:27:20.382 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2847821) - No such process 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 2847821 is not found' 00:27:20.382 Process with pid 2847821 is not found 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.382 10:46:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.312 10:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:22.312 00:27:22.312 real 1m30.588s 00:27:22.312 user 2m10.559s 00:27:22.312 sys 0m15.306s 00:27:22.312 10:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:22.312 10:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 ************************************ 00:27:22.312 END TEST nvmf_digest 00:27:22.312 ************************************ 00:27:22.312 10:46:38 nvmf_tcp -- 
nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:22.312 10:46:38 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:27:22.312 10:46:38 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy-fallback == phy ]] 00:27:22.312 10:46:38 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:27:22.312 10:46:38 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:22.312 10:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 10:46:38 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:22.312 00:27:22.312 real 17m49.272s 00:27:22.312 user 36m51.298s 00:27:22.312 sys 4m49.663s 00:27:22.312 10:46:38 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:22.312 10:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 ************************************ 00:27:22.312 END TEST nvmf_tcp 00:27:22.312 ************************************ 00:27:22.312 10:46:38 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:27:22.312 10:46:38 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:22.312 10:46:38 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:22.312 10:46:38 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:22.312 10:46:38 -- common/autotest_common.sh@10 -- # set +x 00:27:22.570 ************************************ 00:27:22.570 START TEST spdkcli_nvmf_tcp 00:27:22.570 ************************************ 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:22.570 * Looking for test storage... 00:27:22.570 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.570 10:46:38 
spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2851723 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2851723 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 2851723 ']' 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.570 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:22.570 [2024-05-15 10:46:38.343510] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
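The trace above shows spdkcli/common.sh launching build/bin/nvmf_tgt -m 0x3 -p 0, recording its pid (2851723), and then sitting in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers, before any spdkcli commands are issued. A minimal bash sketch of that start-and-wait pattern follows; it is an illustration only, not the repository's waitforlisten helper, and the SPDK_DIR variable, the ~10 second poll budget, and the use of rpc_get_methods as a readiness probe are assumptions made for this example.

#!/usr/bin/env bash
# Sketch: start the NVMe-oF target and wait for its RPC socket, as the trace above does.
# SPDK_DIR is an assumed variable pointing at an SPDK checkout; adjust as needed.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/dsa-phy-autotest/spdk}
RPC_SOCK=/var/tmp/spdk.sock

# Start the target on cores 0-1 (-m 0x3), matching the command in the trace.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

# Poll the RPC socket until the target responds (roughly what waitforlisten does),
# giving up after about 10 seconds.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# The test then drives the target over spdkcli, e.g.:
#   "$SPDK_DIR/scripts/spdkcli.py" ll /nvmf
# and tears it down afterwards by killing "$tgt_pid".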
00:27:22.570 [2024-05-15 10:46:38.343622] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851723 ] 00:27:22.570 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.828 [2024-05-15 10:46:38.455341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:22.828 [2024-05-15 10:46:38.550299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.828 [2024-05-15 10:46:38.550300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.395 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:23.395 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:23.395 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:23.395 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:23.395 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:23.395 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:23.395 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:23.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses 
create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:23.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:23.396 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:23.396 ' 00:27:26.003 [2024-05-15 10:46:41.435274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.936 [2024-05-15 10:46:42.596568] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:26.936 [2024-05-15 10:46:42.596882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:29.463 [2024-05-15 10:46:44.727526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:30.840 [2024-05-15 10:46:46.557754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:32.213 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:32.213 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:32.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:32.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:32.213 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:32.213 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:32.213 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:32.213 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.472 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:32.472 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:32.472 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.472 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:32.472 10:46:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit 
spdkcli_check_match 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.730 10:46:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:32.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:32.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:32.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:32.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:32.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:32.730 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:32.730 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:32.730 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:32.730 ' 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:37.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:37.992 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:37.992 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:37.992 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2851723 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2851723 ']' 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2851723 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2851723 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2851723' 00:27:37.992 killing process with pid 2851723 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 2851723 00:27:37.992 [2024-05-15 10:46:53.542187] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:37.992 10:46:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 2851723 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2851723 ']' 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2851723 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2851723 ']' 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2851723 00:27:38.251 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2851723) - No such process 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 2851723 is not found' 00:27:38.251 Process with pid 2851723 is not found 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:38.251 10:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:38.251 00:27:38.251 real 0m15.814s 00:27:38.251 user 0m32.041s 00:27:38.251 sys 0m0.701s 00:27:38.252 10:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:38.252 10:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.252 ************************************ 00:27:38.252 END TEST spdkcli_nvmf_tcp 00:27:38.252 ************************************ 00:27:38.252 10:46:54 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:38.252 10:46:54 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:38.252 10:46:54 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:38.252 
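Each quoted group handed to test/spdkcli/spdkcli_job.py above is a spdkcli command plus its expected match string and flag, echoed back verbatim in the 'Executing command:' lines. The same configuration can also be driven one call at a time with scripts/spdkcli.py, which the harness already invokes for 'll /nvmf'; a rough sketch, assuming spdkcli.py accepts a one-shot command on its argument list the way that call does:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
  CLI="$SPDK_DIR/scripts/spdkcli.py"
  # Create a malloc bdev, expose it over NVMe/TCP, then tear everything down again.
  $CLI /bdevs/malloc create 32 512 Malloc1
  $CLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $CLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1
  $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  $CLI /nvmf/subsystem delete_all
  $CLI /bdevs/malloc delete Malloc1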
10:46:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.252 ************************************ 00:27:38.252 START TEST nvmf_identify_passthru 00:27:38.252 ************************************ 00:27:38.252 10:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:38.511 * Looking for test storage... 00:27:38.511 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:27:38.511 10:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:38.511 10:46:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.511 10:46:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.511 10:46:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.511 10:46:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.511 10:46:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.511 10:46:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.511 10:46:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:38.511 10:46:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.511 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.512 10:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:38.512 10:46:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.512 10:46:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.512 10:46:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.512 10:46:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.512 10:46:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.512 10:46:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.512 10:46:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:38.512 10:46:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.512 10:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.512 10:46:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.512 10:46:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.512 10:46:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.512 10:46:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:43.779 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:43.779 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:43.779 10:46:59 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:43.779 Found net devices under 0000:27:00.0: cvl_0_0 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:43.779 Found net devices under 0000:27:00.1: cvl_0_1 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.779 10:46:59 
nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.779 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.780 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.780 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.780 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.780 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:44.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:27:44.057 00:27:44.057 --- 10.0.0.2 ping statistics --- 00:27:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.057 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:27:44.057 00:27:44.057 --- 10.0.0.1 ping statistics --- 00:27:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.057 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:44.057 10:46:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:44.057 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.057 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:44.057 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:27:44.315 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:27:44.315 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:27:44.315 10:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:03:00.0 00:27:44.315 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:03:00.0 00:27:44.315 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:03:00.0 ']' 00:27:44.315 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:27:44.315 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:44.315 10:46:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:44.315 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.693 
10:47:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=233442AA2262 00:27:45.693 10:47:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:27:45.693 10:47:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:45.693 10:47:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:45.693 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=Micron_7450_MTFDKBA960TFR 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2858767 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2858767 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 2858767 ']' 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:46.630 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:46.891 [2024-05-15 10:47:02.571360] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:27:46.891 [2024-05-15 10:47:02.571480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.891 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.891 [2024-05-15 10:47:02.696049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.149 [2024-05-15 10:47:02.792651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.149 [2024-05-15 10:47:02.792692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
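Because this second target was started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, the subsystem framework stays paused until the JSON-RPC calls shown below arrive: nvmf_set_config with --passthru-identify-ctrlr, then framework_start_init, then nvmf_create_transport. A hand-driven sketch of that sequence, assuming the harness's rpc_cmd wrapper is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock socket:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_set_config --passthru-identify-ctrlr   # enable the custom Identify Controller handler noted below
  $RPC framework_start_init                        # leave the --wait-for-rpc pause and finish subsystem init
  $RPC nvmf_create_transport -t tcp -o -u 8192     # same transport options the test passes via rpc_cmd

After that, the bdev_nvme_attach_controller and nvmf_* subsystem calls below build the passthru path whose serial and model numbers are compared against the local drive.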
00:27:47.149 [2024-05-15 10:47:02.792702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.149 [2024-05-15 10:47:02.792711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.149 [2024-05-15 10:47:02.792719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.149 [2024-05-15 10:47:02.792799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.149 [2024-05-15 10:47:02.792906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.149 [2024-05-15 10:47:02.793013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.149 [2024-05-15 10:47:02.793026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.407 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:47.692 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:27:47.692 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.693 INFO: Log level set to 20 00:27:47.693 INFO: Requests: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "method": "nvmf_set_config", 00:27:47.693 "id": 1, 00:27:47.693 "params": { 00:27:47.693 "admin_cmd_passthru": { 00:27:47.693 "identify_ctrlr": true 00:27:47.693 } 00:27:47.693 } 00:27:47.693 } 00:27:47.693 00:27:47.693 INFO: response: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "id": 1, 00:27:47.693 "result": true 00:27:47.693 } 00:27:47.693 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.693 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.693 INFO: Setting log level to 20 00:27:47.693 INFO: Setting log level to 20 00:27:47.693 INFO: Log level set to 20 00:27:47.693 INFO: Log level set to 20 00:27:47.693 INFO: Requests: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "method": "framework_start_init", 00:27:47.693 "id": 1 00:27:47.693 } 00:27:47.693 00:27:47.693 INFO: Requests: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "method": "framework_start_init", 00:27:47.693 "id": 1 00:27:47.693 } 00:27:47.693 00:27:47.693 [2024-05-15 10:47:03.432700] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:47.693 INFO: response: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "id": 1, 00:27:47.693 "result": true 00:27:47.693 } 00:27:47.693 00:27:47.693 INFO: response: 00:27:47.693 { 00:27:47.693 "jsonrpc": "2.0", 00:27:47.693 "id": 1, 00:27:47.693 "result": true 00:27:47.693 } 00:27:47.693 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.693 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.693 10:47:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.693 INFO: Setting log level to 40 00:27:47.693 INFO: Setting log level to 40 00:27:47.693 INFO: Setting log level to 40 00:27:47.693 [2024-05-15 10:47:03.446909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.693 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.693 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.693 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.260 Nvme0n1 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.260 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.260 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.260 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.260 [2024-05-15 10:47:03.886100] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:48.260 [2024-05-15 10:47:03.886388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.260 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.260 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.260 [ 00:27:48.260 { 00:27:48.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:48.260 "subtype": "Discovery", 00:27:48.260 "listen_addresses": [], 00:27:48.260 "allow_any_host": true, 00:27:48.260 "hosts": [] 00:27:48.260 }, 00:27:48.260 { 00:27:48.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.260 "subtype": "NVMe", 00:27:48.260 "listen_addresses": [ 00:27:48.260 { 00:27:48.260 "trtype": "TCP", 
00:27:48.260 "adrfam": "IPv4", 00:27:48.260 "traddr": "10.0.0.2", 00:27:48.260 "trsvcid": "4420" 00:27:48.260 } 00:27:48.260 ], 00:27:48.260 "allow_any_host": true, 00:27:48.260 "hosts": [], 00:27:48.260 "serial_number": "SPDK00000000000001", 00:27:48.260 "model_number": "SPDK bdev Controller", 00:27:48.260 "max_namespaces": 1, 00:27:48.260 "min_cntlid": 1, 00:27:48.260 "max_cntlid": 65519, 00:27:48.261 "namespaces": [ 00:27:48.261 { 00:27:48.261 "nsid": 1, 00:27:48.261 "bdev_name": "Nvme0n1", 00:27:48.261 "name": "Nvme0n1", 00:27:48.261 "nguid": "000000000000000100A0752342AA2262", 00:27:48.261 "uuid": "00000000-0000-0001-00a0-752342aa2262" 00:27:48.261 } 00:27:48.261 ] 00:27:48.261 } 00:27:48.261 ] 00:27:48.261 10:47:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.261 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:48.261 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:48.261 10:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:48.261 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.519 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=233442AA2262 00:27:48.519 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:48.519 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:48.519 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:48.519 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=Micron_7450_MTFDKBA960TFR 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 233442AA2262 '!=' 233442AA2262 ']' 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' Micron_7450_MTFDKBA960TFR '!=' Micron_7450_MTFDKBA960TFR ']' 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.778 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.778 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.778 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:48.778 10:47:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:48.778 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.778 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:48.778 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.778 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:48.778 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.779 rmmod nvme_tcp 00:27:48.779 rmmod 
nvme_fabrics 00:27:48.779 rmmod nvme_keyring 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2858767 ']' 00:27:48.779 10:47:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2858767 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 2858767 ']' 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 2858767 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2858767 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2858767' 00:27:48.779 killing process with pid 2858767 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 2858767 00:27:48.779 [2024-05-15 10:47:04.649470] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:48.779 10:47:04 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 2858767 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.155 10:47:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.155 10:47:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:50.155 10:47:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.061 10:47:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.062 00:27:52.062 real 0m13.835s 00:27:52.062 user 0m14.908s 00:27:52.062 sys 0m5.126s 00:27:52.062 10:47:07 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:52.062 10:47:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.062 ************************************ 00:27:52.062 END TEST nvmf_identify_passthru 00:27:52.062 ************************************ 00:27:52.062 10:47:07 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:52.062 10:47:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:52.062 10:47:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:52.062 10:47:07 -- common/autotest_common.sh@10 -- # set +x 00:27:52.322 ************************************ 00:27:52.322 
START TEST nvmf_dif 00:27:52.322 ************************************ 00:27:52.322 10:47:07 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:52.322 * Looking for test storage... 00:27:52.322 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:52.322 10:47:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.322 10:47:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.322 10:47:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.322 10:47:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.322 10:47:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.322 10:47:08 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.322 10:47:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:52.322 10:47:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:52.322 10:47:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.322 10:47:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.322 10:47:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.322 10:47:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.322 10:47:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
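The NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1 defaults set above feed the bdev_null_create calls issued further down in this trace. As a minimal standalone sketch of that call, assuming rpc.py is invoked from the SPDK checkout and the target is already listening on the default /var/tmp/spdk.sock:

  # Create a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block
  # metadata and DIF type 1 -- the backing device these dif tests export.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1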
00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:57.602 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:57.602 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@372 -- # 
[[ '' == e810 ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:57.602 Found net devices under 0000:27:00.0: cvl_0_0 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:57.602 Found net devices under 0000:27:00.1: cvl_0_1 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.602 10:47:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@258 -- # ip 
link set cvl_0_1 up 00:27:57.603 10:47:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:57.861 00:27:57.861 --- 10.0.0.2 ping statistics --- 00:27:57.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.861 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:27:57.861 00:27:57.861 --- 10.0.0.1 ping statistics --- 00:27:57.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.861 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:57.861 10:47:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:57.862 10:47:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:00.398 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:28:00.398 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:00.398 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:00.398 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.398 10:47:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:00.398 10:47:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:00.398 
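For readers reconstructing the topology from the interleaved trace above, the target-side namespace setup performed by nvmf_tcp_init boils down to the following sequence (interface names and addresses exactly as printed above; the iptables rule and the two pings are the connectivity check):

  # Target NIC moves into its own namespace; initiator NIC stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns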
10:47:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2864716 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2864716 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 2864716 ']' 00:28:00.398 10:47:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:00.398 10:47:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:00.398 [2024-05-15 10:47:16.130847] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:28:00.398 [2024-05-15 10:47:16.130910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.398 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.398 [2024-05-15 10:47:16.218957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.659 [2024-05-15 10:47:16.310646] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.659 [2024-05-15 10:47:16.310682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.659 [2024-05-15 10:47:16.310692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.659 [2024-05-15 10:47:16.310701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.659 [2024-05-15 10:47:16.310708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
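nvmfappstart runs nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A rough standalone equivalent is sketched below; the polling loop is only an illustrative stand-in for waitforlisten's internal logic, and scripts/rpc.py is assumed to be run from the SPDK checkout:

  # Start nvmf_tgt in the target namespace with shm id 0 and all tracepoint
  # groups enabled, as the trace shows, then poll the RPC socket until it answers.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # illustrative readiness probe, not the script's actual check
  done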
00:28:00.659 [2024-05-15 10:47:16.310743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:28:01.229 10:47:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 10:47:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.229 10:47:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:01.229 10:47:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 [2024-05-15 10:47:16.873394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.229 10:47:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 ************************************ 00:28:01.229 START TEST fio_dif_1_default 00:28:01.229 ************************************ 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 bdev_null0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.229 [2024-05-15 10:47:16.941367] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:01.229 [2024-05-15 10:47:16.941623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:01.229 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.230 { 00:28:01.230 "params": { 00:28:01.230 "name": "Nvme$subsystem", 00:28:01.230 "trtype": "$TEST_TRANSPORT", 00:28:01.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.230 "adrfam": "ipv4", 00:28:01.230 "trsvcid": "$NVMF_PORT", 00:28:01.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.230 "hdgst": ${hdgst:-false}, 00:28:01.230 "ddgst": ${ddgst:-false} 00:28:01.230 }, 00:28:01.230 "method": "bdev_nvme_attach_controller" 00:28:01.230 } 00:28:01.230 EOF 00:28:01.230 )") 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 
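Pulling together the control-plane calls this first case issued above (the transport once in dif.sh@50, then the subsystem around the null bdev created earlier), the export over the namespace boundary is:

  # Transport with DIF insert/strip offload enabled, then one subsystem backed
  # by bdev_null0, listening on the target-namespace address from the trace.
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420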
00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:01.230 "params": { 00:28:01.230 "name": "Nvme0", 00:28:01.230 "trtype": "tcp", 00:28:01.230 "traddr": "10.0.0.2", 00:28:01.230 "adrfam": "ipv4", 00:28:01.230 "trsvcid": "4420", 00:28:01.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.230 "hdgst": false, 00:28:01.230 "ddgst": false 00:28:01.230 }, 00:28:01.230 "method": "bdev_nvme_attach_controller" 00:28:01.230 }' 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # break 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:01.230 10:47:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.796 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:01.796 fio-3.35 00:28:01.796 Starting 1 thread 00:28:01.796 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.992 00:28:13.992 filename0: (groupid=0, jobs=1): err= 0: pid=2865191: Wed May 15 10:47:28 2024 00:28:13.992 read: IOPS=190, BW=760KiB/s (779kB/s)(7632KiB/10038msec) 00:28:13.992 slat (nsec): min=6013, max=33089, avg=6815.26, stdev=1485.40 00:28:13.992 clat (usec): min=488, max=41889, avg=21025.45, stdev=20401.67 00:28:13.992 lat (usec): min=494, max=41922, avg=21032.26, stdev=20401.44 00:28:13.992 clat percentiles (usec): 00:28:13.992 | 1.00th=[ 537], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:28:13.992 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[41157], 60.00th=[41157], 00:28:13.992 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:28:13.992 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:28:13.992 | 99.99th=[41681] 00:28:13.992 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.60, stdev=19.70, samples=20 00:28:13.992 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:28:13.992 lat (usec) : 500=0.21%, 750=49.48%, 1000=0.21% 00:28:13.992 lat (msec) : 50=50.10% 00:28:13.992 cpu : usr=95.75%, sys=3.98%, ctx=13, majf=0, minf=1634 00:28:13.992 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:13.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.992 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.992 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:13.992 00:28:13.992 Run status group 0 (all jobs): 00:28:13.992 READ: bw=760KiB/s (779kB/s), 760KiB/s-760KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10038-10038msec 00:28:13.992 ----------------------------------------------------- 00:28:13.992 Suppressions used: 00:28:13.992 count bytes template 00:28:13.992 1 8 /usr/src/fio/parse.c 00:28:13.992 1 8 libtcmalloc_minimal.so 00:28:13.992 1 904 libcrypto.so 00:28:13.992 ----------------------------------------------------- 00:28:13.992 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 00:28:13.992 real 0m11.899s 00:28:13.992 user 0m30.257s 00:28:13.992 sys 0m0.824s 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 ************************************ 00:28:13.992 END TEST fio_dif_1_default 00:28:13.992 ************************************ 00:28:13.992 10:47:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:13.992 10:47:28 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:13.992 10:47:28 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 ************************************ 00:28:13.992 START TEST fio_dif_1_multi_subsystems 00:28:13.992 ************************************ 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 bdev_null0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 [2024-05-15 10:47:28.899915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 bdev_null1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
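The multi-subsystems case repeats the same per-subsystem setup once per file, as the remainder of this setup shows call by call; consolidated into loop form as a sketch:

  # Same four-step pattern as cnode0, parameterized by subsystem index.
  for i in 0 1; do
      rpc_cmd bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done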
00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:13.992 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.993 { 00:28:13.993 "params": { 00:28:13.993 "name": "Nvme$subsystem", 00:28:13.993 "trtype": "$TEST_TRANSPORT", 00:28:13.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.993 "adrfam": "ipv4", 00:28:13.993 "trsvcid": "$NVMF_PORT", 00:28:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:28:13.993 "hdgst": ${hdgst:-false}, 00:28:13.993 "ddgst": ${ddgst:-false} 00:28:13.993 }, 00:28:13.993 "method": "bdev_nvme_attach_controller" 00:28:13.993 } 00:28:13.993 EOF 00:28:13.993 )") 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.993 { 00:28:13.993 "params": { 00:28:13.993 "name": "Nvme$subsystem", 00:28:13.993 "trtype": "$TEST_TRANSPORT", 00:28:13.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.993 "adrfam": "ipv4", 00:28:13.993 "trsvcid": "$NVMF_PORT", 00:28:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.993 "hdgst": ${hdgst:-false}, 00:28:13.993 "ddgst": ${ddgst:-false} 00:28:13.993 }, 00:28:13.993 "method": "bdev_nvme_attach_controller" 00:28:13.993 } 00:28:13.993 EOF 00:28:13.993 )") 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
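Once the per-subsystem JSON is assembled and checked with jq, it is handed to fio together with the generated job file over anonymous file descriptors, with the SPDK fio bdev plugin (and libasan, since SPDK_RUN_ASAN=1) preloaded, the same handoff already seen in the fio_dif_1_default run above. A simplified standalone form of that launch, where ./target.json and ./dif.fio are placeholder file names standing in for the /dev/fd/62 and /dev/fd/61 handles the script uses:

  # libasan is listed first so the sanitizer runtime loads before the plugin
  # (preload order as in the trace).
  LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./target.json ./dif.fio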
00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:13.993 "params": { 00:28:13.993 "name": "Nvme0", 00:28:13.993 "trtype": "tcp", 00:28:13.993 "traddr": "10.0.0.2", 00:28:13.993 "adrfam": "ipv4", 00:28:13.993 "trsvcid": "4420", 00:28:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.993 "hdgst": false, 00:28:13.993 "ddgst": false 00:28:13.993 }, 00:28:13.993 "method": "bdev_nvme_attach_controller" 00:28:13.993 },{ 00:28:13.993 "params": { 00:28:13.993 "name": "Nvme1", 00:28:13.993 "trtype": "tcp", 00:28:13.993 "traddr": "10.0.0.2", 00:28:13.993 "adrfam": "ipv4", 00:28:13.993 "trsvcid": "4420", 00:28:13.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.993 "hdgst": false, 00:28:13.993 "ddgst": false 00:28:13.993 }, 00:28:13.993 "method": "bdev_nvme_attach_controller" 00:28:13.993 }' 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # break 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:13.993 10:47:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.993 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:13.993 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:13.993 fio-3.35 00:28:13.993 Starting 2 threads 00:28:13.993 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.255 00:28:26.255 filename0: (groupid=0, jobs=1): err= 0: pid=2867705: Wed May 15 10:47:40 2024 00:28:26.255 read: IOPS=187, BW=750KiB/s (768kB/s)(7520KiB/10031msec) 00:28:26.255 slat (nsec): min=6039, max=35260, avg=6960.17, stdev=1742.88 00:28:26.255 clat (usec): min=509, max=41874, avg=21323.12, stdev=20372.42 00:28:26.255 lat (usec): min=515, max=41909, avg=21330.08, stdev=20372.06 00:28:26.255 clat percentiles (usec): 00:28:26.255 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 594], 00:28:26.255 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[41157], 60.00th=[41157], 00:28:26.255 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:26.255 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:28:26.255 | 99.99th=[41681] 00:28:26.255 bw ( KiB/s): min= 672, max= 768, per=49.74%, avg=750.40, stdev=30.22, samples=20 00:28:26.255 iops : min= 168, max= 192, avg=187.60, stdev= 7.56, samples=20 00:28:26.255 lat (usec) : 750=49.15% 00:28:26.255 lat (msec) : 50=50.85% 00:28:26.255 cpu : usr=98.23%, sys=1.50%, ctx=13, majf=0, minf=1636 00:28:26.255 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.255 issued rwts: 
total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.255 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:26.255 filename1: (groupid=0, jobs=1): err= 0: pid=2867706: Wed May 15 10:47:40 2024 00:28:26.255 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10038msec) 00:28:26.255 slat (nsec): min=6042, max=31093, avg=6897.94, stdev=1577.39 00:28:26.255 clat (usec): min=456, max=42280, avg=21067.76, stdev=20374.87 00:28:26.255 lat (usec): min=462, max=42312, avg=21074.66, stdev=20374.57 00:28:26.255 clat percentiles (usec): 00:28:26.255 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:28:26.255 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[41157], 60.00th=[41157], 00:28:26.255 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:26.255 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:28:26.255 | 99.99th=[42206] 00:28:26.255 bw ( KiB/s): min= 673, max= 768, per=50.40%, avg=760.05, stdev=24.98, samples=20 00:28:26.255 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:28:26.255 lat (usec) : 500=0.42%, 750=48.69%, 1000=0.68% 00:28:26.255 lat (msec) : 50=50.21% 00:28:26.255 cpu : usr=97.92%, sys=1.78%, ctx=13, majf=0, minf=1632 00:28:26.255 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.255 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.255 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:26.255 00:28:26.255 Run status group 0 (all jobs): 00:28:26.255 READ: bw=1508KiB/s (1544kB/s), 750KiB/s-759KiB/s (768kB/s-777kB/s), io=14.8MiB (15.5MB), run=10031-10038msec 00:28:26.255 ----------------------------------------------------- 00:28:26.255 Suppressions used: 00:28:26.255 count bytes template 00:28:26.255 2 16 /usr/src/fio/parse.c 00:28:26.255 1 8 libtcmalloc_minimal.so 00:28:26.255 1 904 libcrypto.so 00:28:26.255 ----------------------------------------------------- 00:28:26.255 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.255 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 
10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 00:28:26.256 real 0m12.055s 00:28:26.256 user 0m40.837s 00:28:26.256 sys 0m0.759s 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 ************************************ 00:28:26.256 END TEST fio_dif_1_multi_subsystems 00:28:26.256 ************************************ 00:28:26.256 10:47:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:26.256 10:47:40 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:26.256 10:47:40 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 ************************************ 00:28:26.256 START TEST fio_dif_rand_params 00:28:26.256 ************************************ 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 bdev_null0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.256 [2024-05-15 10:47:41.010849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.256 
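fio_dif_rand_params switches the backing device to DIF type 3 and drives it with 128 KiB blocks, three jobs and queue depth 3 for 5 seconds; relative to the earlier cases only the bdev_null_create flag changes, the transport and fio plumbing stay the same:

  # DIF type 3 backing bdev for the rand_params case (command as issued above).
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

DIF type 1 ties the reference tag to the LBA while type 3 leaves it application-defined, so this case exercises a different protection-information path through the same --dif-insert-or-strip transport.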
10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.256 { 00:28:26.256 "params": { 00:28:26.256 "name": "Nvme$subsystem", 00:28:26.256 "trtype": "$TEST_TRANSPORT", 00:28:26.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.256 "adrfam": "ipv4", 00:28:26.256 "trsvcid": "$NVMF_PORT", 00:28:26.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.256 "hdgst": ${hdgst:-false}, 00:28:26.256 "ddgst": ${ddgst:-false} 00:28:26.256 }, 00:28:26.256 "method": "bdev_nvme_attach_controller" 00:28:26.256 } 00:28:26.256 EOF 00:28:26.256 )") 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.256 "params": { 00:28:26.256 "name": "Nvme0", 00:28:26.256 "trtype": "tcp", 00:28:26.256 "traddr": "10.0.0.2", 00:28:26.256 "adrfam": "ipv4", 00:28:26.256 "trsvcid": "4420", 00:28:26.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.256 "hdgst": false, 00:28:26.256 "ddgst": false 00:28:26.256 }, 00:28:26.256 "method": "bdev_nvme_attach_controller" 00:28:26.256 }' 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.256 10:47:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.256 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:26.256 ... 
00:28:26.256 fio-3.35 00:28:26.256 Starting 3 threads 00:28:26.256 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.523 00:28:31.523 filename0: (groupid=0, jobs=1): err= 0: pid=2870214: Wed May 15 10:47:47 2024 00:28:31.523 read: IOPS=257, BW=32.2MiB/s (33.7MB/s)(162MiB/5045msec) 00:28:31.523 slat (nsec): min=4567, max=22055, avg=7416.01, stdev=1236.45 00:28:31.523 clat (usec): min=3305, max=90916, avg=11617.60, stdev=13120.26 00:28:31.523 lat (usec): min=3311, max=90923, avg=11625.02, stdev=13120.32 00:28:31.523 clat percentiles (usec): 00:28:31.523 | 1.00th=[ 3654], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5735], 00:28:31.523 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7963], 00:28:31.523 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[45351], 95.00th=[47449], 00:28:31.523 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[90702], 00:28:31.523 | 99.99th=[90702] 00:28:31.523 bw ( KiB/s): min=19968, max=44800, per=34.06%, avg=33177.60, stdev=8677.36, samples=10 00:28:31.523 iops : min= 156, max= 350, avg=259.20, stdev=67.79, samples=10 00:28:31.523 lat (msec) : 4=3.62%, 10=80.20%, 20=4.78%, 50=9.94%, 100=1.46% 00:28:31.523 cpu : usr=97.36%, sys=2.38%, ctx=7, majf=0, minf=1639 00:28:31.523 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.523 issued rwts: total=1298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.523 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.523 filename0: (groupid=0, jobs=1): err= 0: pid=2870215: Wed May 15 10:47:47 2024 00:28:31.523 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(153MiB/5043msec) 00:28:31.523 slat (nsec): min=6053, max=25360, avg=7587.26, stdev=1565.81 00:28:31.523 clat (usec): min=3038, max=51634, avg=12367.92, stdev=13868.74 00:28:31.523 lat (usec): min=3045, max=51641, avg=12375.51, stdev=13868.85 00:28:31.523 clat percentiles (usec): 00:28:31.523 | 1.00th=[ 3621], 5.00th=[ 3949], 10.00th=[ 4293], 20.00th=[ 5604], 00:28:31.523 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 8160], 00:28:31.523 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[45876], 95.00th=[47449], 00:28:31.523 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:28:31.523 | 99.99th=[51643] 00:28:31.523 bw ( KiB/s): min=17920, max=49920, per=32.04%, avg=31206.40, stdev=11333.89, samples=10 00:28:31.523 iops : min= 140, max= 390, avg=243.80, stdev=88.55, samples=10 00:28:31.523 lat (msec) : 4=5.81%, 10=74.47%, 20=6.46%, 50=11.13%, 100=2.13% 00:28:31.523 cpu : usr=97.24%, sys=2.50%, ctx=7, majf=0, minf=1634 00:28:31.523 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.523 issued rwts: total=1222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.523 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.523 filename0: (groupid=0, jobs=1): err= 0: pid=2870216: Wed May 15 10:47:47 2024 00:28:31.523 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(165MiB/5003msec) 00:28:31.523 slat (nsec): min=6040, max=25843, avg=7742.29, stdev=1774.68 00:28:31.523 clat (usec): min=2845, max=54499, avg=11368.25, stdev=13270.63 00:28:31.523 lat (usec): min=2851, max=54525, avg=11375.99, stdev=13270.81 00:28:31.523 clat 
percentiles (usec): 00:28:31.523 | 1.00th=[ 3359], 5.00th=[ 3654], 10.00th=[ 3851], 20.00th=[ 4686], 00:28:31.523 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 7504], 00:28:31.523 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[45351], 95.00th=[49021], 00:28:31.523 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:28:31.523 | 99.99th=[54264] 00:28:31.523 bw ( KiB/s): min=19456, max=52480, per=34.62%, avg=33720.40, stdev=10882.62, samples=10 00:28:31.523 iops : min= 152, max= 410, avg=263.40, stdev=85.05, samples=10 00:28:31.523 lat (msec) : 4=12.43%, 10=67.85%, 20=8.57%, 50=8.04%, 100=3.11% 00:28:31.523 cpu : usr=97.38%, sys=2.34%, ctx=5, majf=0, minf=1637 00:28:31.523 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.524 issued rwts: total=1319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.524 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.524 00:28:31.524 Run status group 0 (all jobs): 00:28:31.524 READ: bw=95.1MiB/s (99.7MB/s), 30.3MiB/s-33.0MiB/s (31.8MB/s-34.6MB/s), io=480MiB (503MB), run=5003-5045msec 00:28:32.091 ----------------------------------------------------- 00:28:32.091 Suppressions used: 00:28:32.091 count bytes template 00:28:32.091 5 44 /usr/src/fio/parse.c 00:28:32.091 1 8 libtcmalloc_minimal.so 00:28:32.091 1 904 libcrypto.so 00:28:32.091 ----------------------------------------------------- 00:28:32.091 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:32.091 
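The teardown just above and the create_subsystems call that follows boil down to a short, repeatable RPC sequence; rpc_cmd in these traces presumably forwards its arguments to SPDK's scripts/rpc.py against the running target. A sketch of the equivalent standalone calls for one DIF-type-2 null-bdev subsystem follows; the commands and arguments mirror the rpc_cmd lines in the trace, while the rpc.py path and the assumption that the TCP transport was already created earlier in the run are not shown here.

#!/usr/bin/env bash
# Sketch: build and tear down one NVMe-oF subsystem backed by a DIF-type-2 null bdev,
# mirroring the rpc_cmd sequence in the trace. Address and NQN are the ones used by this job.
RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (NULL_DIF=2 above).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Subsystem, namespace, and TCP listener (the transport itself is assumed to exist already).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# ... run fio against the target here (see the earlier sketch) ...

# Teardown in the same order destroy_subsystems uses.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0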
10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 bdev_null0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 [2024-05-15 10:47:47.857608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 bdev_null1 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 bdev_null2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.091 10:47:47 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:32.091 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.091 { 00:28:32.091 "params": { 00:28:32.091 "name": "Nvme$subsystem", 00:28:32.091 "trtype": "$TEST_TRANSPORT", 00:28:32.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.091 "adrfam": "ipv4", 00:28:32.091 "trsvcid": "$NVMF_PORT", 00:28:32.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.091 "hdgst": ${hdgst:-false}, 00:28:32.091 "ddgst": ${ddgst:-false} 00:28:32.091 }, 00:28:32.091 "method": "bdev_nvme_attach_controller" 00:28:32.091 } 00:28:32.091 EOF 00:28:32.092 )") 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.092 { 00:28:32.092 "params": { 00:28:32.092 "name": "Nvme$subsystem", 00:28:32.092 "trtype": 
"$TEST_TRANSPORT", 00:28:32.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.092 "adrfam": "ipv4", 00:28:32.092 "trsvcid": "$NVMF_PORT", 00:28:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.092 "hdgst": ${hdgst:-false}, 00:28:32.092 "ddgst": ${ddgst:-false} 00:28:32.092 }, 00:28:32.092 "method": "bdev_nvme_attach_controller" 00:28:32.092 } 00:28:32.092 EOF 00:28:32.092 )") 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.092 { 00:28:32.092 "params": { 00:28:32.092 "name": "Nvme$subsystem", 00:28:32.092 "trtype": "$TEST_TRANSPORT", 00:28:32.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.092 "adrfam": "ipv4", 00:28:32.092 "trsvcid": "$NVMF_PORT", 00:28:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.092 "hdgst": ${hdgst:-false}, 00:28:32.092 "ddgst": ${ddgst:-false} 00:28:32.092 }, 00:28:32.092 "method": "bdev_nvme_attach_controller" 00:28:32.092 } 00:28:32.092 EOF 00:28:32.092 )") 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:32.092 "params": { 00:28:32.092 "name": "Nvme0", 00:28:32.092 "trtype": "tcp", 00:28:32.092 "traddr": "10.0.0.2", 00:28:32.092 "adrfam": "ipv4", 00:28:32.092 "trsvcid": "4420", 00:28:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.092 "hdgst": false, 00:28:32.092 "ddgst": false 00:28:32.092 }, 00:28:32.092 "method": "bdev_nvme_attach_controller" 00:28:32.092 },{ 00:28:32.092 "params": { 00:28:32.092 "name": "Nvme1", 00:28:32.092 "trtype": "tcp", 00:28:32.092 "traddr": "10.0.0.2", 00:28:32.092 "adrfam": "ipv4", 00:28:32.092 "trsvcid": "4420", 00:28:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.092 "hdgst": false, 00:28:32.092 "ddgst": false 00:28:32.092 }, 00:28:32.092 "method": "bdev_nvme_attach_controller" 00:28:32.092 },{ 00:28:32.092 "params": { 00:28:32.092 "name": "Nvme2", 00:28:32.092 "trtype": "tcp", 00:28:32.092 "traddr": "10.0.0.2", 00:28:32.092 "adrfam": "ipv4", 00:28:32.092 "trsvcid": "4420", 00:28:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:32.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:32.092 "hdgst": false, 00:28:32.092 "ddgst": false 00:28:32.092 }, 00:28:32.092 "method": "bdev_nvme_attach_controller" 00:28:32.092 }' 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:32.092 10:47:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.672 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.672 ... 00:28:32.672 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.672 ... 00:28:32.672 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.672 ... 00:28:32.672 fio-3.35 00:28:32.672 Starting 24 threads 00:28:32.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.873 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871829: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=506, BW=2027KiB/s (2076kB/s)(19.8MiB/10008msec) 00:28:44.873 slat (usec): min=6, max=116, avg=28.67, stdev=20.01 00:28:44.873 clat (usec): min=17082, max=84233, avg=31354.49, stdev=3174.94 00:28:44.873 lat (usec): min=17105, max=84260, avg=31383.16, stdev=3173.82 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.873 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.873 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.873 | 99.00th=[32900], 99.50th=[33162], 99.90th=[84411], 99.95th=[84411], 00:28:44.873 | 99.99th=[84411] 00:28:44.873 bw ( KiB/s): min= 1792, max= 2176, per=4.15%, avg=2021.05, stdev=80.72, samples=19 00:28:44.873 iops : min= 448, max= 544, avg=505.26, stdev=20.18, samples=19 00:28:44.873 lat (msec) : 20=0.16%, 50=99.53%, 100=0.32% 00:28:44.873 cpu : usr=98.53%, sys=0.90%, ctx=214, majf=0, minf=1633 00:28:44.873 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871830: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=514, BW=2060KiB/s (2109kB/s)(20.1MiB/10006msec) 00:28:44.873 slat (nsec): min=6402, max=95004, avg=25955.77, stdev=18066.80 00:28:44.873 clat (usec): min=3531, max=42704, avg=30857.95, stdev=3141.45 00:28:44.873 lat (usec): min=3544, max=42720, avg=30883.91, stdev=3142.54 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[ 9110], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.873 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.873 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.873 | 99.00th=[32637], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:28:44.873 | 99.99th=[42730] 00:28:44.873 bw ( KiB/s): min= 1920, max= 2432, per=4.24%, avg=2061.47, stdev=103.59, samples=19 00:28:44.873 iops : min= 480, max= 608, avg=515.37, stdev=25.90, samples=19 00:28:44.873 lat (msec) : 4=0.49%, 10=0.72%, 20=0.31%, 50=98.49% 00:28:44.873 cpu : usr=98.83%, sys=0.75%, ctx=13, majf=0, minf=1636 00:28:44.873 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871831: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=504, BW=2019KiB/s (2068kB/s)(19.8MiB/10052msec) 00:28:44.873 slat (nsec): min=5921, max=96048, avg=28440.99, stdev=18003.10 00:28:44.873 clat (usec): min=24208, max=70024, avg=31287.90, stdev=2345.77 00:28:44.873 lat (usec): min=24218, max=70053, avg=31316.34, stdev=2345.26 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.873 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.873 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:28:44.873 | 99.00th=[32900], 99.50th=[33817], 99.90th=[69731], 99.95th=[69731], 00:28:44.873 | 99.99th=[69731] 00:28:44.873 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2028.80, stdev=75.15, samples=20 00:28:44.873 iops : min= 448, max= 544, avg=507.20, stdev=18.79, samples=20 00:28:44.873 lat (msec) : 50=99.63%, 100=0.37% 00:28:44.873 cpu : usr=99.00%, sys=0.59%, ctx=11, majf=0, minf=1635 00:28:44.873 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 issued rwts: total=5075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871832: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=508, BW=2033KiB/s (2081kB/s)(19.9MiB/10013msec) 00:28:44.873 slat (usec): min=5, max=127, avg=23.21, stdev=20.94 00:28:44.873 clat (usec): min=20753, max=58799, avg=31321.10, stdev=1822.49 00:28:44.873 lat (usec): min=20801, max=58827, avg=31344.32, stdev=1820.42 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.873 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.873 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.873 | 99.00th=[32900], 99.50th=[33424], 99.90th=[58983], 99.95th=[58983], 00:28:44.873 | 99.99th=[58983] 00:28:44.873 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2034.53, stdev=58.73, samples=19 00:28:44.873 iops : min= 480, max= 544, avg=508.63, stdev=14.68, samples=19 00:28:44.873 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.873 cpu : usr=99.02%, sys=0.59%, ctx=14, majf=0, minf=1634 00:28:44.873 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871833: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10005msec) 00:28:44.873 slat (usec): min=4, max=122, avg=43.53, stdev=22.65 00:28:44.873 clat (usec): min=20968, max=50760, avg=31076.86, stdev=1419.50 00:28:44.873 lat (usec): 
min=20998, max=50786, avg=31120.39, stdev=1419.54 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.873 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:44.873 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.873 | 99.00th=[32900], 99.50th=[33424], 99.90th=[50594], 99.95th=[50594], 00:28:44.873 | 99.99th=[50594] 00:28:44.873 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2034.53, stdev=58.73, samples=19 00:28:44.873 iops : min= 480, max= 544, avg=508.63, stdev=14.68, samples=19 00:28:44.873 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.873 cpu : usr=98.82%, sys=0.72%, ctx=54, majf=0, minf=1634 00:28:44.873 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.873 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.873 filename0: (groupid=0, jobs=1): err= 0: pid=2871834: Wed May 15 10:47:59 2024 00:28:44.873 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.3MiB/10010msec) 00:28:44.873 slat (nsec): min=5848, max=99028, avg=22605.48, stdev=19237.19 00:28:44.873 clat (usec): min=2399, max=42572, avg=30612.60, stdev=4186.79 00:28:44.873 lat (usec): min=2410, max=42613, avg=30635.21, stdev=4187.76 00:28:44.873 clat percentiles (usec): 00:28:44.873 | 1.00th=[ 4228], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.873 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.873 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.874 | 99.00th=[32637], 99.50th=[33817], 99.90th=[42206], 99.95th=[42730], 00:28:44.874 | 99.99th=[42730] 00:28:44.874 bw ( KiB/s): min= 1920, max= 2816, per=4.28%, avg=2081.68, stdev=185.21, samples=19 00:28:44.874 iops : min= 480, max= 704, avg=520.42, stdev=46.30, samples=19 00:28:44.874 lat (msec) : 4=0.96%, 10=1.46%, 50=97.58% 00:28:44.874 cpu : usr=98.76%, sys=0.78%, ctx=39, majf=0, minf=1637 00:28:44.874 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename0: (groupid=0, jobs=1): err= 0: pid=2871835: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10009msec) 00:28:44.874 slat (usec): min=5, max=134, avg=40.69, stdev=24.66 00:28:44.874 clat (usec): min=20891, max=54694, avg=31152.76, stdev=1600.30 00:28:44.874 lat (usec): min=20904, max=54721, avg=31193.44, stdev=1598.80 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.874 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.874 | 70.00th=[31589], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.874 | 99.00th=[32637], 99.50th=[33424], 99.90th=[54789], 99.95th=[54789], 00:28:44.874 | 99.99th=[54789] 00:28:44.874 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2034.53, stdev=58.73, samples=19 00:28:44.874 iops : min= 480, max= 544, avg=508.63, 
stdev=14.68, samples=19 00:28:44.874 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.874 cpu : usr=98.93%, sys=0.61%, ctx=27, majf=0, minf=1637 00:28:44.874 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename0: (groupid=0, jobs=1): err= 0: pid=2871836: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.8MiB/10017msec) 00:28:44.874 slat (usec): min=6, max=136, avg=28.36, stdev=19.10 00:28:44.874 clat (usec): min=16899, max=70085, avg=31260.14, stdev=2464.60 00:28:44.874 lat (usec): min=16918, max=70111, avg=31288.50, stdev=2464.56 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.874 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.874 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:28:44.874 | 99.00th=[33162], 99.50th=[39060], 99.90th=[69731], 99.95th=[69731], 00:28:44.874 | 99.99th=[69731] 00:28:44.874 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2028.80, stdev=75.15, samples=20 00:28:44.874 iops : min= 448, max= 544, avg=507.20, stdev=18.79, samples=20 00:28:44.874 lat (msec) : 20=0.16%, 50=99.53%, 100=0.31% 00:28:44.874 cpu : usr=98.50%, sys=0.80%, ctx=47, majf=0, minf=1634 00:28:44.874 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: pid=2871837: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=506, BW=2028KiB/s (2076kB/s)(19.8MiB/10005msec) 00:28:44.874 slat (usec): min=6, max=113, avg=40.99, stdev=21.21 00:28:44.874 clat (usec): min=18300, max=83842, avg=31208.08, stdev=2806.48 00:28:44.874 lat (usec): min=18307, max=83870, avg=31249.07, stdev=2806.02 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.874 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.874 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.874 | 99.00th=[33424], 99.50th=[39060], 99.90th=[74974], 99.95th=[74974], 00:28:44.874 | 99.99th=[83362] 00:28:44.874 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2027.79, stdev=75.77, samples=19 00:28:44.874 iops : min= 448, max= 544, avg=506.95, stdev=18.94, samples=19 00:28:44.874 lat (msec) : 20=0.20%, 50=99.49%, 100=0.32% 00:28:44.874 cpu : usr=98.93%, sys=0.63%, ctx=15, majf=0, minf=1634 00:28:44.874 IO depths : 1=4.3%, 2=10.5%, 4=24.9%, 8=52.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: 
pid=2871838: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=517, BW=2072KiB/s (2121kB/s)(20.2MiB/10009msec) 00:28:44.874 slat (nsec): min=5817, max=94209, avg=12572.99, stdev=10817.69 00:28:44.874 clat (usec): min=3142, max=42651, avg=30786.25, stdev=3882.12 00:28:44.874 lat (usec): min=3154, max=42670, avg=30798.83, stdev=3881.59 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[ 5538], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.874 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.874 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:44.874 | 99.00th=[32637], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:28:44.874 | 99.99th=[42730] 00:28:44.874 bw ( KiB/s): min= 1920, max= 2688, per=4.26%, avg=2074.95, stdev=157.23, samples=19 00:28:44.874 iops : min= 480, max= 672, avg=518.74, stdev=39.31, samples=19 00:28:44.874 lat (msec) : 4=0.79%, 10=1.33%, 20=0.04%, 50=97.84% 00:28:44.874 cpu : usr=98.94%, sys=0.62%, ctx=14, majf=0, minf=1635 00:28:44.874 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: pid=2871839: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10010msec) 00:28:44.874 slat (usec): min=6, max=138, avg=47.80, stdev=28.45 00:28:44.874 clat (usec): min=22106, max=51853, avg=31000.38, stdev=1428.56 00:28:44.874 lat (usec): min=22117, max=51882, avg=31048.18, stdev=1430.09 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:28:44.874 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:44.874 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.874 | 99.00th=[32637], 99.50th=[33424], 99.90th=[51643], 99.95th=[51643], 00:28:44.874 | 99.99th=[51643] 00:28:44.874 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2027.95, stdev=63.91, samples=19 00:28:44.874 iops : min= 480, max= 544, avg=506.95, stdev=16.05, samples=19 00:28:44.874 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.874 cpu : usr=99.05%, sys=0.50%, ctx=15, majf=0, minf=1636 00:28:44.874 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: pid=2871840: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10014msec) 00:28:44.874 slat (usec): min=6, max=123, avg=15.50, stdev=17.59 00:28:44.874 clat (usec): min=20760, max=59856, avg=31376.13, stdev=2012.41 00:28:44.874 lat (usec): min=20776, max=59889, avg=31391.64, stdev=2011.49 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.874 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.874 | 70.00th=[31589], 80.00th=[31851], 
90.00th=[32113], 95.00th=[32113], 00:28:44.874 | 99.00th=[33424], 99.50th=[40109], 99.90th=[60031], 99.95th=[60031], 00:28:44.874 | 99.99th=[60031] 00:28:44.874 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2028.80, stdev=62.64, samples=20 00:28:44.874 iops : min= 480, max= 544, avg=507.20, stdev=15.66, samples=20 00:28:44.874 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.874 cpu : usr=98.96%, sys=0.58%, ctx=15, majf=0, minf=1636 00:28:44.874 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: pid=2871841: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=507, BW=2028KiB/s (2077kB/s)(19.8MiB/10006msec) 00:28:44.874 slat (usec): min=6, max=126, avg=30.53, stdev=22.52 00:28:44.874 clat (usec): min=7403, max=88831, avg=31268.20, stdev=3821.09 00:28:44.874 lat (usec): min=7410, max=88862, avg=31298.73, stdev=3820.95 00:28:44.874 clat percentiles (usec): 00:28:44.874 | 1.00th=[24511], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.874 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.874 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.874 | 99.00th=[36439], 99.50th=[49546], 99.90th=[88605], 99.95th=[88605], 00:28:44.874 | 99.99th=[88605] 00:28:44.874 bw ( KiB/s): min= 1792, max= 2112, per=4.16%, avg=2026.11, stdev=65.58, samples=19 00:28:44.874 iops : min= 448, max= 528, avg=506.53, stdev=16.40, samples=19 00:28:44.874 lat (msec) : 10=0.20%, 20=0.32%, 50=99.17%, 100=0.32% 00:28:44.874 cpu : usr=98.99%, sys=0.55%, ctx=13, majf=0, minf=1636 00:28:44.874 IO depths : 1=4.5%, 2=10.1%, 4=22.6%, 8=54.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:28:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.874 issued rwts: total=5074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.874 filename1: (groupid=0, jobs=1): err= 0: pid=2871842: Wed May 15 10:47:59 2024 00:28:44.874 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10011msec) 00:28:44.874 slat (usec): min=6, max=128, avg=45.36, stdev=25.01 00:28:44.874 clat (usec): min=22840, max=52983, avg=31039.15, stdev=1447.79 00:28:44.874 lat (usec): min=22849, max=53046, avg=31084.51, stdev=1449.30 00:28:44.874 clat percentiles (usec): 00:28:44.875 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30540], 00:28:44.875 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:44.875 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.875 | 99.00th=[32637], 99.50th=[33162], 99.90th=[52691], 99.95th=[52691], 00:28:44.875 | 99.99th=[53216] 00:28:44.875 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2027.79, stdev=64.19, samples=19 00:28:44.875 iops : min= 480, max= 544, avg=506.95, stdev=16.05, samples=19 00:28:44.875 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.875 cpu : usr=99.03%, sys=0.52%, ctx=17, majf=0, minf=1636 00:28:44.875 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename1: (groupid=0, jobs=1): err= 0: pid=2871843: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=507, BW=2028KiB/s (2077kB/s)(19.8MiB/10003msec) 00:28:44.875 slat (usec): min=6, max=139, avg=33.12, stdev=20.39 00:28:44.875 clat (usec): min=21115, max=79376, avg=31250.21, stdev=2827.60 00:28:44.875 lat (usec): min=21124, max=79407, avg=31283.32, stdev=2826.84 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.875 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.875 | 70.00th=[31589], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.875 | 99.00th=[32637], 99.50th=[32900], 99.90th=[79168], 99.95th=[79168], 00:28:44.875 | 99.99th=[79168] 00:28:44.875 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2027.79, stdev=77.07, samples=19 00:28:44.875 iops : min= 448, max= 544, avg=506.95, stdev=19.27, samples=19 00:28:44.875 lat (msec) : 50=99.68%, 100=0.32% 00:28:44.875 cpu : usr=99.02%, sys=0.54%, ctx=13, majf=0, minf=1636 00:28:44.875 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename1: (groupid=0, jobs=1): err= 0: pid=2871844: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10006msec) 00:28:44.875 slat (usec): min=6, max=117, avg=31.05, stdev=21.82 00:28:44.875 clat (usec): min=5873, max=76506, avg=31146.37, stdev=3443.68 00:28:44.875 lat (usec): min=5885, max=76536, avg=31177.43, stdev=3444.38 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.875 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.875 | 70.00th=[31589], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.875 | 99.00th=[39584], 99.50th=[49546], 99.90th=[76022], 99.95th=[76022], 00:28:44.875 | 99.99th=[76022] 00:28:44.875 bw ( KiB/s): min= 1792, max= 2160, per=4.17%, avg=2030.32, stdev=84.65, samples=19 00:28:44.875 iops : min= 448, max= 540, avg=507.58, stdev=21.16, samples=19 00:28:44.875 lat (msec) : 10=0.20%, 20=0.67%, 50=98.78%, 100=0.35% 00:28:44.875 cpu : usr=98.86%, sys=0.68%, ctx=13, majf=0, minf=1635 00:28:44.875 IO depths : 1=2.0%, 2=7.7%, 4=22.7%, 8=56.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=93.8%, 8=1.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename2: (groupid=0, jobs=1): err= 0: pid=2871845: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=506, BW=2028KiB/s (2076kB/s)(19.8MiB/10005msec) 00:28:44.875 slat (usec): min=6, max=123, avg=29.96, stdev=21.10 00:28:44.875 clat (usec): min=11898, max=81457, avg=31336.27, stdev=3667.16 00:28:44.875 lat (usec): min=11904, max=81485, 
avg=31366.22, stdev=3666.99 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[17957], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.875 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.875 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.875 | 99.00th=[44827], 99.50th=[45876], 99.90th=[81265], 99.95th=[81265], 00:28:44.875 | 99.99th=[81265] 00:28:44.875 bw ( KiB/s): min= 1792, max= 2160, per=4.15%, avg=2021.05, stdev=78.03, samples=19 00:28:44.875 iops : min= 448, max= 540, avg=505.26, stdev=19.51, samples=19 00:28:44.875 lat (msec) : 20=1.26%, 50=98.42%, 100=0.32% 00:28:44.875 cpu : usr=98.87%, sys=0.68%, ctx=10, majf=0, minf=1636 00:28:44.875 IO depths : 1=2.3%, 2=8.1%, 4=23.3%, 8=55.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename2: (groupid=0, jobs=1): err= 0: pid=2871846: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=509, BW=2038KiB/s (2086kB/s)(19.9MiB/10008msec) 00:28:44.875 slat (usec): min=5, max=126, avg=39.71, stdev=24.07 00:28:44.875 clat (usec): min=7992, max=79311, avg=31050.81, stdev=3469.00 00:28:44.875 lat (usec): min=7999, max=79336, avg=31090.52, stdev=3469.91 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[22152], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.875 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:44.875 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.875 | 99.00th=[35914], 99.50th=[47973], 99.90th=[79168], 99.95th=[79168], 00:28:44.875 | 99.99th=[79168] 00:28:44.875 bw ( KiB/s): min= 1792, max= 2144, per=4.18%, avg=2032.84, stdev=78.83, samples=19 00:28:44.875 iops : min= 448, max= 536, avg=508.21, stdev=19.71, samples=19 00:28:44.875 lat (msec) : 10=0.27%, 20=0.67%, 50=98.74%, 100=0.31% 00:28:44.875 cpu : usr=98.92%, sys=0.62%, ctx=13, majf=0, minf=1636 00:28:44.875 IO depths : 1=5.3%, 2=10.9%, 4=22.4%, 8=53.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename2: (groupid=0, jobs=1): err= 0: pid=2871847: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=506, BW=2027KiB/s (2076kB/s)(19.8MiB/10008msec) 00:28:44.875 slat (nsec): min=6079, max=97419, avg=25918.52, stdev=20559.10 00:28:44.875 clat (usec): min=17707, max=84652, avg=31369.15, stdev=3196.44 00:28:44.875 lat (usec): min=17714, max=84680, avg=31395.07, stdev=3195.25 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:44.875 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.875 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:28:44.875 | 99.00th=[32637], 99.50th=[32900], 99.90th=[84411], 99.95th=[84411], 00:28:44.875 | 99.99th=[84411] 00:28:44.875 bw ( KiB/s): min= 1792, max= 2176, per=4.15%, avg=2021.05, stdev=80.72, samples=19 00:28:44.875 iops : min= 448, max= 544, avg=505.26, 
stdev=20.18, samples=19 00:28:44.875 lat (msec) : 20=0.16%, 50=99.53%, 100=0.32% 00:28:44.875 cpu : usr=98.77%, sys=0.78%, ctx=15, majf=0, minf=1634 00:28:44.875 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.875 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.875 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.875 filename2: (groupid=0, jobs=1): err= 0: pid=2871848: Wed May 15 10:47:59 2024 00:28:44.875 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10010msec) 00:28:44.875 slat (usec): min=6, max=114, avg=38.46, stdev=23.36 00:28:44.875 clat (usec): min=20873, max=56391, avg=31178.46, stdev=1694.82 00:28:44.875 lat (usec): min=20906, max=56417, avg=31216.93, stdev=1693.29 00:28:44.875 clat percentiles (usec): 00:28:44.875 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.875 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.876 | 70.00th=[31589], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.876 | 99.00th=[32900], 99.50th=[33162], 99.90th=[56361], 99.95th=[56361], 00:28:44.876 | 99.99th=[56361] 00:28:44.876 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2034.53, stdev=58.73, samples=19 00:28:44.876 iops : min= 480, max= 544, avg=508.63, stdev=14.68, samples=19 00:28:44.876 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.876 cpu : usr=99.01%, sys=0.54%, ctx=14, majf=0, minf=1637 00:28:44.876 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.876 filename2: (groupid=0, jobs=1): err= 0: pid=2871849: Wed May 15 10:47:59 2024 00:28:44.876 read: IOPS=509, BW=2036KiB/s (2085kB/s)(19.9MiB/10003msec) 00:28:44.876 slat (nsec): min=6800, max=99734, avg=31823.07, stdev=20422.52 00:28:44.876 clat (usec): min=13437, max=79104, avg=31138.38, stdev=3289.99 00:28:44.876 lat (usec): min=13446, max=79129, avg=31170.20, stdev=3290.24 00:28:44.876 clat percentiles (usec): 00:28:44.876 | 1.00th=[21365], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.876 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:44.876 | 70.00th=[31589], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.876 | 99.00th=[36439], 99.50th=[44303], 99.90th=[79168], 99.95th=[79168], 00:28:44.876 | 99.99th=[79168] 00:28:44.876 bw ( KiB/s): min= 1792, max= 2240, per=4.18%, avg=2036.21, stdev=91.27, samples=19 00:28:44.876 iops : min= 448, max= 560, avg=509.05, stdev=22.82, samples=19 00:28:44.876 lat (msec) : 20=0.86%, 50=98.82%, 100=0.31% 00:28:44.876 cpu : usr=98.86%, sys=0.73%, ctx=16, majf=0, minf=1634 00:28:44.876 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:44.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 issued rwts: total=5092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.876 filename2: (groupid=0, 
jobs=1): err= 0: pid=2871850: Wed May 15 10:47:59 2024 00:28:44.876 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10011msec) 00:28:44.876 slat (usec): min=5, max=116, avg=44.07, stdev=22.21 00:28:44.876 clat (usec): min=23780, max=53109, avg=31070.01, stdev=1453.13 00:28:44.876 lat (usec): min=23788, max=53139, avg=31114.08, stdev=1453.53 00:28:44.876 clat percentiles (usec): 00:28:44.876 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30540], 00:28:44.876 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:44.876 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.876 | 99.00th=[32637], 99.50th=[33162], 99.90th=[53216], 99.95th=[53216], 00:28:44.876 | 99.99th=[53216] 00:28:44.876 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2027.79, stdev=64.19, samples=19 00:28:44.876 iops : min= 480, max= 544, avg=506.95, stdev=16.05, samples=19 00:28:44.876 lat (msec) : 50=99.69%, 100=0.31% 00:28:44.876 cpu : usr=98.94%, sys=0.60%, ctx=15, majf=0, minf=1634 00:28:44.876 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.876 filename2: (groupid=0, jobs=1): err= 0: pid=2871851: Wed May 15 10:47:59 2024 00:28:44.876 read: IOPS=507, BW=2028KiB/s (2077kB/s)(19.8MiB/10003msec) 00:28:44.876 slat (usec): min=5, max=119, avg=43.94, stdev=21.93 00:28:44.876 clat (usec): min=27495, max=70985, avg=31168.75, stdev=2336.11 00:28:44.876 lat (usec): min=27530, max=71011, avg=31212.69, stdev=2334.81 00:28:44.876 clat percentiles (usec): 00:28:44.876 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.876 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:44.876 | 70.00th=[31327], 80.00th=[31589], 90.00th=[31851], 95.00th=[32113], 00:28:44.876 | 99.00th=[32900], 99.50th=[33162], 99.90th=[70779], 99.95th=[70779], 00:28:44.876 | 99.99th=[70779] 00:28:44.876 bw ( KiB/s): min= 1795, max= 2176, per=4.17%, avg=2027.95, stdev=76.57, samples=19 00:28:44.876 iops : min= 448, max= 544, avg=506.95, stdev=19.27, samples=19 00:28:44.876 lat (msec) : 50=99.68%, 100=0.32% 00:28:44.876 cpu : usr=99.02%, sys=0.53%, ctx=15, majf=0, minf=1636 00:28:44.876 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.876 filename2: (groupid=0, jobs=1): err= 0: pid=2871852: Wed May 15 10:47:59 2024 00:28:44.876 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10015msec) 00:28:44.876 slat (nsec): min=6113, max=98359, avg=26249.56, stdev=15465.36 00:28:44.876 clat (usec): min=14537, max=69735, avg=31259.38, stdev=2537.91 00:28:44.876 lat (usec): min=14543, max=69774, avg=31285.63, stdev=2537.73 00:28:44.876 clat percentiles (usec): 00:28:44.876 | 1.00th=[25297], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:44.876 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.876 | 70.00th=[31589], 80.00th=[31851], 
90.00th=[32113], 95.00th=[32113], 00:28:44.876 | 99.00th=[33424], 99.50th=[38011], 99.90th=[69731], 99.95th=[69731], 00:28:44.876 | 99.99th=[69731] 00:28:44.876 bw ( KiB/s): min= 1795, max= 2176, per=4.17%, avg=2028.95, stdev=74.66, samples=20 00:28:44.876 iops : min= 448, max= 544, avg=507.20, stdev=18.79, samples=20 00:28:44.876 lat (msec) : 20=0.31%, 50=99.37%, 100=0.31% 00:28:44.876 cpu : usr=98.90%, sys=0.65%, ctx=13, majf=0, minf=1635 00:28:44.876 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:44.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.876 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.876 00:28:44.876 Run status group 0 (all jobs): 00:28:44.876 READ: bw=47.5MiB/s (49.8MB/s), 2019KiB/s-2078KiB/s (2068kB/s-2128kB/s), io=478MiB (501MB), run=10003-10052msec 00:28:44.876 ----------------------------------------------------- 00:28:44.876 Suppressions used: 00:28:44.876 count bytes template 00:28:44.876 45 402 /usr/src/fio/parse.c 00:28:44.876 1 8 libtcmalloc_minimal.so 00:28:44.876 1 904 libcrypto.so 00:28:44.876 ----------------------------------------------------- 00:28:44.876 00:28:44.876 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:44.876 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:44.876 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 bdev_null0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 [2024-05-15 10:48:00.490447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 bdev_null1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.877 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:44.877 10:48:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.878 { 00:28:44.878 "params": { 00:28:44.878 "name": "Nvme$subsystem", 00:28:44.878 "trtype": "$TEST_TRANSPORT", 00:28:44.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.878 "adrfam": "ipv4", 00:28:44.878 "trsvcid": "$NVMF_PORT", 00:28:44.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.878 "hdgst": ${hdgst:-false}, 00:28:44.878 "ddgst": ${ddgst:-false} 00:28:44.878 }, 00:28:44.878 "method": "bdev_nvme_attach_controller" 00:28:44.878 } 00:28:44.878 EOF 00:28:44.878 )") 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.878 { 00:28:44.878 "params": { 00:28:44.878 "name": "Nvme$subsystem", 00:28:44.878 "trtype": "$TEST_TRANSPORT", 00:28:44.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.878 "adrfam": "ipv4", 00:28:44.878 "trsvcid": "$NVMF_PORT", 00:28:44.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.878 "hdgst": ${hdgst:-false}, 00:28:44.878 "ddgst": ${ddgst:-false} 00:28:44.878 }, 00:28:44.878 "method": "bdev_nvme_attach_controller" 00:28:44.878 } 00:28:44.878 EOF 00:28:44.878 )") 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:44.878 10:48:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:44.878 "params": { 00:28:44.878 "name": "Nvme0", 00:28:44.878 "trtype": "tcp", 00:28:44.878 "traddr": "10.0.0.2", 00:28:44.878 "adrfam": "ipv4", 00:28:44.878 "trsvcid": "4420", 00:28:44.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.878 "hdgst": false, 00:28:44.878 "ddgst": false 00:28:44.878 }, 00:28:44.878 "method": "bdev_nvme_attach_controller" 00:28:44.878 },{ 00:28:44.878 "params": { 00:28:44.878 "name": "Nvme1", 00:28:44.878 "trtype": "tcp", 00:28:44.878 "traddr": "10.0.0.2", 00:28:44.878 "adrfam": "ipv4", 00:28:44.878 "trsvcid": "4420", 00:28:44.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.878 "hdgst": false, 00:28:44.878 "ddgst": false 00:28:44.878 }, 00:28:44.878 "method": "bdev_nvme_attach_controller" 00:28:44.878 }' 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:44.878 10:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:45.137 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:45.137 ... 00:28:45.137 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:45.137 ... 
00:28:45.137 fio-3.35 00:28:45.137 Starting 4 threads 00:28:45.137 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.974 00:28:51.974 filename0: (groupid=0, jobs=1): err= 0: pid=2874584: Wed May 15 10:48:06 2024 00:28:51.974 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:28:51.974 slat (nsec): min=4974, max=69412, avg=9753.17, stdev=6385.49 00:28:51.974 clat (usec): min=728, max=6833, avg=3017.30, stdev=575.69 00:28:51.974 lat (usec): min=737, max=6846, avg=3027.05, stdev=576.66 00:28:51.974 clat percentiles (usec): 00:28:51.974 | 1.00th=[ 1811], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2573], 00:28:51.974 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 3064], 00:28:51.974 | 70.00th=[ 3195], 80.00th=[ 3458], 90.00th=[ 3752], 95.00th=[ 3949], 00:28:51.974 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 6325], 99.95th=[ 6718], 00:28:51.974 | 99.99th=[ 6849] 00:28:51.974 bw ( KiB/s): min=18160, max=23552, per=25.53%, avg=20780.44, stdev=1682.60, samples=9 00:28:51.974 iops : min= 2270, max= 2944, avg=2597.56, stdev=210.33, samples=9 00:28:51.974 lat (usec) : 750=0.01%, 1000=0.05% 00:28:51.974 lat (msec) : 2=2.18%, 4=93.23%, 10=4.52% 00:28:51.974 cpu : usr=97.62%, sys=2.04%, ctx=16, majf=0, minf=1634 00:28:51.974 IO depths : 1=0.1%, 2=7.6%, 4=62.7%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 issued rwts: total=13137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.974 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.974 filename0: (groupid=0, jobs=1): err= 0: pid=2874585: Wed May 15 10:48:06 2024 00:28:51.974 read: IOPS=2503, BW=19.6MiB/s (20.5MB/s)(97.8MiB/5002msec) 00:28:51.974 slat (nsec): min=5130, max=76777, avg=9163.32, stdev=6689.29 00:28:51.974 clat (usec): min=590, max=6842, avg=3166.13, stdev=622.91 00:28:51.974 lat (usec): min=599, max=6848, avg=3175.29, stdev=623.13 00:28:51.974 clat percentiles (usec): 00:28:51.974 | 1.00th=[ 1795], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2769], 00:28:51.974 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3163], 00:28:51.974 | 70.00th=[ 3359], 80.00th=[ 3589], 90.00th=[ 3884], 95.00th=[ 4293], 00:28:51.974 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[ 6521], 00:28:51.974 | 99.99th=[ 6783] 00:28:51.974 bw ( KiB/s): min=17216, max=21424, per=24.61%, avg=20030.40, stdev=1535.56, samples=10 00:28:51.974 iops : min= 2152, max= 2678, avg=2503.80, stdev=191.95, samples=10 00:28:51.974 lat (usec) : 750=0.02%, 1000=0.10% 00:28:51.974 lat (msec) : 2=1.51%, 4=90.47%, 10=7.90% 00:28:51.974 cpu : usr=97.86%, sys=1.80%, ctx=7, majf=0, minf=1636 00:28:51.974 IO depths : 1=0.1%, 2=9.0%, 4=62.8%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 issued rwts: total=12524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.974 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.974 filename1: (groupid=0, jobs=1): err= 0: pid=2874586: Wed May 15 10:48:06 2024 00:28:51.974 read: IOPS=2593, BW=20.3MiB/s (21.2MB/s)(101MiB/5002msec) 00:28:51.974 slat (usec): min=4, max=257, avg= 9.23, stdev= 7.00 00:28:51.974 clat (usec): min=616, max=7190, avg=3055.55, stdev=612.03 00:28:51.974 lat (usec): min=626, max=7214, avg=3064.78, stdev=612.40 
00:28:51.974 clat percentiles (usec): 00:28:51.974 | 1.00th=[ 1778], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2606], 00:28:51.974 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3097], 00:28:51.974 | 70.00th=[ 3261], 80.00th=[ 3523], 90.00th=[ 3785], 95.00th=[ 4080], 00:28:51.974 | 99.00th=[ 5014], 99.50th=[ 5342], 99.90th=[ 5932], 99.95th=[ 6194], 00:28:51.974 | 99.99th=[ 6521] 00:28:51.974 bw ( KiB/s): min=18368, max=22864, per=25.48%, avg=20744.70, stdev=1685.91, samples=10 00:28:51.974 iops : min= 2296, max= 2858, avg=2593.00, stdev=210.79, samples=10 00:28:51.974 lat (usec) : 750=0.02%, 1000=0.05% 00:28:51.974 lat (msec) : 2=2.50%, 4=91.61%, 10=5.81% 00:28:51.974 cpu : usr=97.28%, sys=2.42%, ctx=7, majf=0, minf=1635 00:28:51.974 IO depths : 1=0.1%, 2=10.3%, 4=60.9%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 issued rwts: total=12971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.974 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.974 filename1: (groupid=0, jobs=1): err= 0: pid=2874588: Wed May 15 10:48:06 2024 00:28:51.974 read: IOPS=2452, BW=19.2MiB/s (20.1MB/s)(95.8MiB/5001msec) 00:28:51.974 slat (nsec): min=4517, max=76779, avg=9254.88, stdev=7128.92 00:28:51.974 clat (usec): min=596, max=6768, avg=3232.88, stdev=664.73 00:28:51.974 lat (usec): min=602, max=6778, avg=3242.14, stdev=664.85 00:28:51.974 clat percentiles (usec): 00:28:51.974 | 1.00th=[ 1926], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2802], 00:28:51.974 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3097], 60.00th=[ 3228], 00:28:51.974 | 70.00th=[ 3425], 80.00th=[ 3687], 90.00th=[ 4015], 95.00th=[ 4490], 00:28:51.974 | 99.00th=[ 5538], 99.50th=[ 5932], 99.90th=[ 6456], 99.95th=[ 6587], 00:28:51.974 | 99.99th=[ 6783] 00:28:51.974 bw ( KiB/s): min=16544, max=21296, per=24.10%, avg=19616.80, stdev=1636.88, samples=10 00:28:51.974 iops : min= 2068, max= 2662, avg=2452.10, stdev=204.61, samples=10 00:28:51.974 lat (usec) : 750=0.03%, 1000=0.10% 00:28:51.974 lat (msec) : 2=1.18%, 4=88.68%, 10=10.00% 00:28:51.974 cpu : usr=98.12%, sys=1.56%, ctx=6, majf=0, minf=1637 00:28:51.974 IO depths : 1=0.1%, 2=8.0%, 4=63.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.974 issued rwts: total=12266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.974 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.974 00:28:51.974 Run status group 0 (all jobs): 00:28:51.974 READ: bw=79.5MiB/s (83.4MB/s), 19.2MiB/s-20.5MiB/s (20.1MB/s-21.5MB/s), io=398MiB (417MB), run=5001-5002msec 00:28:51.974 ----------------------------------------------------- 00:28:51.974 Suppressions used: 00:28:51.974 count bytes template 00:28:51.974 6 52 /usr/src/fio/parse.c 00:28:51.974 1 8 libtcmalloc_minimal.so 00:28:51.974 1 904 libcrypto.so 00:28:51.974 ----------------------------------------------------- 00:28:51.974 00:28:51.974 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:51.974 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:51.974 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.974 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 00:28:51.975 real 0m26.454s 00:28:51.975 user 5m26.167s 00:28:51.975 sys 0m3.657s 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 ************************************ 00:28:51.975 END TEST fio_dif_rand_params 00:28:51.975 ************************************ 00:28:51.975 10:48:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:51.975 10:48:07 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:51.975 10:48:07 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 ************************************ 00:28:51.975 START TEST fio_dif_digest 00:28:51.975 ************************************ 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
bs=128k,128k,128k 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 bdev_null0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.975 [2024-05-15 10:48:07.523947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- 
target/dif.sh@54 -- # local file 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:51.975 { 00:28:51.975 "params": { 00:28:51.975 "name": "Nvme$subsystem", 00:28:51.975 "trtype": "$TEST_TRANSPORT", 00:28:51.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.975 "adrfam": "ipv4", 00:28:51.975 "trsvcid": "$NVMF_PORT", 00:28:51.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.975 "hdgst": ${hdgst:-false}, 00:28:51.975 "ddgst": ${ddgst:-false} 00:28:51.975 }, 00:28:51.975 "method": "bdev_nvme_attach_controller" 00:28:51.975 } 00:28:51.975 EOF 00:28:51.975 )") 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:51.975 "params": { 00:28:51.975 "name": "Nvme0", 00:28:51.975 "trtype": "tcp", 00:28:51.975 "traddr": "10.0.0.2", 00:28:51.975 "adrfam": "ipv4", 00:28:51.975 "trsvcid": "4420", 00:28:51.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.975 "hdgst": true, 00:28:51.975 "ddgst": true 00:28:51.975 }, 00:28:51.975 "method": "bdev_nvme_attach_controller" 00:28:51.975 }' 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # break 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:51.975 10:48:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.233 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:52.233 ... 00:28:52.233 fio-3.35 00:28:52.233 Starting 3 threads 00:28:52.233 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.464 00:29:04.464 filename0: (groupid=0, jobs=1): err= 0: pid=2876530: Wed May 15 10:48:18 2024 00:29:04.464 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10045msec) 00:29:04.464 slat (nsec): min=4269, max=46064, avg=7722.07, stdev=1429.13 00:29:04.464 clat (usec): min=7944, max=49209, avg=10722.87, stdev=1252.43 00:29:04.464 lat (usec): min=7952, max=49216, avg=10730.60, stdev=1252.55 00:29:04.464 clat percentiles (usec): 00:29:04.464 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10159], 00:29:04.464 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:29:04.464 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:04.464 | 99.00th=[12911], 99.50th=[13435], 99.90th=[15664], 99.95th=[47973], 00:29:04.464 | 99.99th=[49021] 00:29:04.464 bw ( KiB/s): min=34816, max=37120, per=32.42%, avg=35865.60, stdev=483.59, samples=20 00:29:04.464 iops : min= 272, max= 290, avg=280.20, stdev= 3.78, samples=20 00:29:04.464 lat (msec) : 10=15.01%, 20=84.91%, 50=0.07% 00:29:04.464 cpu : usr=97.39%, sys=2.35%, ctx=14, majf=0, minf=1636 00:29:04.464 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.464 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:04.464 filename0: (groupid=0, jobs=1): err= 0: pid=2876531: Wed May 15 10:48:18 2024 00:29:04.464 read: IOPS=300, BW=37.6MiB/s (39.5MB/s)(378MiB/10047msec) 00:29:04.464 slat (nsec): min=3604, max=54769, avg=10358.03, stdev=2508.91 00:29:04.464 clat (usec): min=7361, max=48199, avg=9940.72, stdev=1192.21 00:29:04.464 lat (usec): min=7370, max=48210, avg=9951.07, stdev=1192.20 00:29:04.464 clat percentiles (usec): 00:29:04.464 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 
00:29:04.464 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:29:04.464 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:29:04.464 | 99.00th=[12256], 99.50th=[13042], 99.90th=[14091], 99.95th=[46924], 00:29:04.464 | 99.99th=[47973] 00:29:04.464 bw ( KiB/s): min=35328, max=39424, per=34.95%, avg=38668.80, stdev=937.76, samples=20 00:29:04.464 iops : min= 276, max= 308, avg=302.10, stdev= 7.33, samples=20 00:29:04.464 lat (msec) : 10=56.91%, 20=43.02%, 50=0.07% 00:29:04.464 cpu : usr=96.57%, sys=2.91%, ctx=644, majf=0, minf=1634 00:29:04.464 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.464 issued rwts: total=3024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:04.464 filename0: (groupid=0, jobs=1): err= 0: pid=2876532: Wed May 15 10:48:18 2024 00:29:04.464 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(357MiB/10005msec) 00:29:04.464 slat (nsec): min=5752, max=30870, avg=9181.77, stdev=2018.62 00:29:04.464 clat (usec): min=5289, max=16215, avg=10497.92, stdev=741.90 00:29:04.464 lat (usec): min=5301, max=16224, avg=10507.10, stdev=741.93 00:29:04.465 clat percentiles (usec): 00:29:04.465 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:29:04.465 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:29:04.465 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:29:04.465 | 99.00th=[12649], 99.50th=[13435], 99.90th=[16057], 99.95th=[16188], 00:29:04.465 | 99.99th=[16188] 00:29:04.465 bw ( KiB/s): min=35072, max=37632, per=33.01%, avg=36518.40, stdev=565.00, samples=20 00:29:04.465 iops : min= 274, max= 294, avg=285.30, stdev= 4.41, samples=20 00:29:04.465 lat (msec) : 10=22.02%, 20=77.98% 00:29:04.465 cpu : usr=97.47%, sys=2.28%, ctx=16, majf=0, minf=1632 00:29:04.465 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.465 issued rwts: total=2856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:04.465 00:29:04.465 Run status group 0 (all jobs): 00:29:04.465 READ: bw=108MiB/s (113MB/s), 34.9MiB/s-37.6MiB/s (36.6MB/s-39.5MB/s), io=1086MiB (1138MB), run=10005-10047msec 00:29:04.465 ----------------------------------------------------- 00:29:04.465 Suppressions used: 00:29:04.465 count bytes template 00:29:04.465 5 44 /usr/src/fio/parse.c 00:29:04.465 1 8 libtcmalloc_minimal.so 00:29:04.465 1 904 libcrypto.so 00:29:04.465 ----------------------------------------------------- 00:29:04.465 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:04.465 10:48:19 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.465 00:29:04.465 real 0m11.807s 00:29:04.465 user 0m46.063s 00:29:04.465 sys 0m1.179s 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:04.465 10:48:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:04.465 ************************************ 00:29:04.465 END TEST fio_dif_digest 00:29:04.465 ************************************ 00:29:04.465 10:48:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:04.465 10:48:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.465 rmmod nvme_tcp 00:29:04.465 rmmod nvme_fabrics 00:29:04.465 rmmod nvme_keyring 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2864716 ']' 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2864716 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 2864716 ']' 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 2864716 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2864716 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2864716' 00:29:04.465 killing process with pid 2864716 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@966 -- # kill 2864716 00:29:04.465 [2024-05-15 10:48:19.439038] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:04.465 10:48:19 nvmf_dif -- common/autotest_common.sh@971 -- # wait 2864716 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:04.465 10:48:19 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:06.402 Waiting for block devices as 
requested 00:29:06.402 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:29:06.660 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:06.660 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:06.660 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:06.660 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:06.918 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:06.918 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:06.918 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:06.918 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.176 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:07.176 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.176 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.176 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.437 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:07.437 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.437 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:07.437 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:07.697 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:29:07.697 10:48:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:07.697 10:48:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:07.697 10:48:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.697 10:48:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:07.697 10:48:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.697 10:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:07.697 10:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.240 10:48:25 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:10.240 00:29:10.240 real 1m17.649s 00:29:10.240 user 8m27.771s 00:29:10.240 sys 0m15.827s 00:29:10.240 10:48:25 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:10.240 10:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:10.240 ************************************ 00:29:10.240 END TEST nvmf_dif 00:29:10.240 ************************************ 00:29:10.240 10:48:25 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:10.240 10:48:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:10.240 10:48:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:10.240 10:48:25 -- common/autotest_common.sh@10 -- # set +x 00:29:10.240 ************************************ 00:29:10.240 START TEST nvmf_abort_qd_sizes 00:29:10.240 ************************************ 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:10.240 * Looking for test storage... 
00:29:10.240 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:10.240 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:10.241 10:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.812 10:48:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:16.812 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:16.812 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:16.812 Found net devices under 0000:27:00.0: cvl_0_0 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.812 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:16.813 Found net devices under 0000:27:00.1: cvl_0_1 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.813 10:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:16.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:29:16.813 00:29:16.813 --- 10.0.0.2 ping statistics --- 00:29:16.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.813 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:16.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:29:16.813 00:29:16.813 --- 10.0.0.1 ping statistics --- 00:29:16.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.813 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:16.813 10:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:29:19.350 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.350 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.350 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.350 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.350 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.350 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.351 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.351 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.351 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.351 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:19.351 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:29:19.916 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:29:20.175 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2886019 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2886019 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 2886019 ']' 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
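Taken together, the nvmf_tcp_init steps traced above build a small back-to-back topology: one NIC port (cvl_0_0) is moved into a private network namespace to act as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the link. A hedged recap of those commands (must run as root; interface names and addresses are the ones from this log):

    #!/usr/bin/env bash
    # Recap of the traced nvmf_tcp_init sequence; adjust interface names for your NIC.
    set -e
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The nvmf_tgt process itself is then launched inside the namespace (the NVMF_TARGET_NS_CMD prefix), which is why the target-side commands in the rest of the trace are wrapped in ip netns exec cvl_0_0_ns_spdk.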
00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:20.434 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 [2024-05-15 10:48:36.258666] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:29:20.434 [2024-05-15 10:48:36.258761] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.693 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.693 [2024-05-15 10:48:36.386455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.693 [2024-05-15 10:48:36.485030] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.693 [2024-05-15 10:48:36.485082] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.693 [2024-05-15 10:48:36.485091] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.693 [2024-05-15 10:48:36.485101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.693 [2024-05-15 10:48:36.485110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.693 [2024-05-15 10:48:36.485171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.693 [2024-05-15 10:48:36.485191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.693 [2024-05-15 10:48:36.485222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.693 [2024-05-15 10:48:36.485234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.259 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:21.259 10:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:29:21.259 10:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:03:00.0 0000:c9:00.0 ]] 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:03:00.0 ]] 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:03:00.0 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:21.259 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:21.259 ************************************ 00:29:21.259 START TEST spdk_target_abort 00:29:21.259 ************************************ 00:29:21.259 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:29:21.259 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:21.259 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target 00:29:21.259 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.259 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.824 spdk_targetn1 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.824 [2024-05-15 10:48:37.452525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
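The spdk_target_abort test above is driven entirely through RPCs against the nvmf_tgt in the namespace: the local NVMe drive at 0000:03:00.0 is attached as spdk_target (exposing bdev spdk_targetn1), a TCP transport and a test subsystem are created, and, as the following trace lines show, the namespace and a 10.0.0.2:4420 listener are added before the abort workload starts. A sketch of that RPC sequence; the rpc.py path is an assumption, since the test issues these through its rpc_cmd wrapper:

    #!/usr/bin/env bash
    # Sketch of the traced RPC sequence; RPC points at an assumed scripts/rpc.py
    # location inside an SPDK checkout.
    RPC="./scripts/rpc.py"
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # rabort then sweeps queue depths 4, 24 and 64 with build/examples/abort against
    # 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'.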
00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.824 [2024-05-15 10:48:37.480499] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:21.824 [2024-05-15 10:48:37.480813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:21.824 10:48:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.824 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.176 Initializing NVMe Controllers 00:29:25.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:25.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:25.176 Initialization complete. Launching workers. 00:29:25.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16970, failed: 0 00:29:25.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1791, failed to submit 15179 00:29:25.176 success 746, unsuccess 1045, failed 0 00:29:25.176 10:48:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.176 10:48:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.176 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.464 Initializing NVMe Controllers 00:29:28.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:28.464 Initialization complete. Launching workers. 00:29:28.464 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8486, failed: 0 00:29:28.464 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7232 00:29:28.464 success 309, unsuccess 945, failed 0 00:29:28.465 10:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:28.465 10:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:28.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.756 Initializing NVMe Controllers 00:29:31.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:31.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:31.756 Initialization complete. Launching workers. 
00:29:31.756 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40040, failed: 0 00:29:31.756 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2563, failed to submit 37477 00:29:31.756 success 596, unsuccess 1967, failed 0 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.756 10:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2886019 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 2886019 ']' 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 2886019 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2886019 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2886019' 00:29:32.323 killing process with pid 2886019 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 2886019 00:29:32.323 [2024-05-15 10:48:48.131996] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:32.323 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 2886019 00:29:32.891 00:29:32.891 real 0m11.438s 00:29:32.891 user 0m46.462s 00:29:32.891 sys 0m1.225s 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.891 ************************************ 00:29:32.891 END TEST spdk_target_abort 00:29:32.891 ************************************ 00:29:32.891 10:48:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:32.891 10:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:32.891 10:48:48 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:29:32.891 10:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:32.891 ************************************ 00:29:32.891 START TEST kernel_target_abort 00:29:32.891 ************************************ 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:32.891 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:32.892 10:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:35.431 Waiting for block devices as requested 00:29:35.431 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:29:35.692 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:35.692 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:35.692 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:35.692 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:35.953 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:35.953 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:35.953 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:35.953 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.213 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:36.213 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.213 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.213 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.473 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:36.473 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.473 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:36.473 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:36.731 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:37.667 No valid GPT data, bailing 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:29:37.667 10:48:53 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:29:37.667 No valid GPT data, bailing 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:37.667 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:29:37.667 00:29:37.667 Discovery Log Number of Records 2, Generation counter 2 00:29:37.667 =====Discovery Log Entry 0====== 00:29:37.667 trtype: tcp 00:29:37.667 adrfam: ipv4 00:29:37.667 subtype: current discovery subsystem 00:29:37.667 treq: not specified, sq flow control disable supported 00:29:37.667 portid: 1 00:29:37.667 trsvcid: 4420 00:29:37.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:37.667 traddr: 10.0.0.1 00:29:37.667 eflags: none 00:29:37.667 sectype: none 00:29:37.668 =====Discovery Log Entry 1====== 00:29:37.668 trtype: tcp 00:29:37.668 adrfam: ipv4 00:29:37.668 
subtype: nvme subsystem 00:29:37.668 treq: not specified, sq flow control disable supported 00:29:37.668 portid: 1 00:29:37.668 trsvcid: 4420 00:29:37.668 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:37.668 traddr: 10.0.0.1 00:29:37.668 eflags: none 00:29:37.668 sectype: none 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:37.668 10:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:37.668 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.960 Initializing NVMe Controllers 00:29:40.960 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:40.960 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:40.960 
Initialization complete. Launching workers. 00:29:40.960 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78475, failed: 0 00:29:40.960 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 78475, failed to submit 0 00:29:40.960 success 0, unsuccess 78475, failed 0 00:29:40.960 10:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:40.960 10:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:40.960 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.246 Initializing NVMe Controllers 00:29:44.246 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:44.246 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:44.246 Initialization complete. Launching workers. 00:29:44.246 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131217, failed: 0 00:29:44.246 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33202, failed to submit 98015 00:29:44.246 success 0, unsuccess 33202, failed 0 00:29:44.246 10:48:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:44.246 10:48:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:44.246 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.570 Initializing NVMe Controllers 00:29:47.571 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:47.571 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:47.571 Initialization complete. Launching workers. 
00:29:47.571 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 125371, failed: 0 00:29:47.571 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31338, failed to submit 94033 00:29:47.571 success 0, unsuccess 31338, failed 0 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:47.571 10:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:29:49.475 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.734 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.734 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.734 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.734 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.734 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.992 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.992 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.992 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:29:49.992 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:49.992 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:29:50.562 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:29:50.823 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:29:51.084 00:29:51.084 real 0m18.121s 00:29:51.084 user 0m8.659s 00:29:51.084 sys 0m5.147s 00:29:51.084 10:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:51.084 10:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.084 ************************************ 00:29:51.084 END TEST kernel_target_abort 00:29:51.084 ************************************ 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:51.084 rmmod nvme_tcp 00:29:51.084 rmmod nvme_fabrics 00:29:51.084 rmmod nvme_keyring 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2886019 ']' 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2886019 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 2886019 ']' 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 2886019 00:29:51.084 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2886019) - No such process 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 2886019 is not found' 00:29:51.084 Process with pid 2886019 is not found 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:51.084 10:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:53.620 Waiting for block devices as requested 00:29:53.620 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:29:53.880 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:53.880 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:53.880 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.139 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.139 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.139 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.139 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.399 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.399 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.399 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.399 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.661 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.661 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.661 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.661 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:54.921 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:54.921 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:55.181 10:49:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.713 10:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.713 00:29:57.713 real 0m47.317s 00:29:57.713 user 0m59.049s 00:29:57.713 sys 0m15.055s 00:29:57.713 10:49:12 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:29:57.713 10:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:57.713 ************************************ 00:29:57.713 END TEST nvmf_abort_qd_sizes 00:29:57.713 ************************************ 00:29:57.713 10:49:13 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:29:57.713 10:49:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:57.713 10:49:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:57.713 10:49:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.713 ************************************ 00:29:57.713 START TEST keyring_file 00:29:57.713 ************************************ 00:29:57.713 10:49:13 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:29:57.713 * Looking for test storage... 00:29:57.713 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:29:57.713 10:49:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:29:57.713 10:49:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:57.713 10:49:13 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.713 10:49:13 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.713 10:49:13 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.713 10:49:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.713 10:49:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.713 10:49:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.713 10:49:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:57.713 10:49:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.713 10:49:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:57.714 10:49:13 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5FlZlTlqaa 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5FlZlTlqaa 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5FlZlTlqaa 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5FlZlTlqaa 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nLQq7dDuDz 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:57.714 10:49:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nLQq7dDuDz 00:29:57.714 10:49:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nLQq7dDuDz 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.nLQq7dDuDz 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=2896351 00:29:57.714 10:49:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2896351 00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2896351 ']' 00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
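Before any keyring RPCs run, prep_key writes each test PSK into a private temp file in the NVMeTLSkey-1 interchange format, as traced above for key0 (/tmp/tmp.5FlZlTlqaa) and key1 (/tmp/tmp.nLQq7dDuDz). A rough sketch of that step; the real interchange payload is produced by format_interchange_psk through an inline python snippet in nvmf/common.sh, so it is only stubbed out here:

    #!/usr/bin/env bash
    # Sketch of prep_key (keyring/common.sh): stash a wrapped PSK in a 0600 temp
    # file whose path is later handed to keyring_file_add_key.
    key0=00112233445566778899aabbccddeeff          # key0 from the trace
    key0path=$(mktemp)                             # /tmp/tmp.5FlZlTlqaa in this run
    # Placeholder only -- the actual NVMeTLSkey-1 payload is computed by
    # format_interchange_psk, not hand-written like this.
    echo "NVMeTLSkey-1:00:<wrapped form of $key0>:" > "$key0path"
    chmod 0600 "$key0path"
    echo "$key0path"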
00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:57.714 10:49:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:57.714 [2024-05-15 10:49:13.274949] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:29:57.714 [2024-05-15 10:49:13.275061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896351 ] 00:29:57.714 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.714 [2024-05-15 10:49:13.384845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.714 [2024-05-15 10:49:13.476796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.284 10:49:13 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:58.284 10:49:13 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:58.284 10:49:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:58.284 10:49:13 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.284 10:49:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:58.284 [2024-05-15 10:49:13.982137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.284 null0 00:29:58.284 [2024-05-15 10:49:14.014040] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:58.284 [2024-05-15 10:49:14.014133] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:58.284 [2024-05-15 10:49:14.014331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:58.284 [2024-05-15 10:49:14.022110] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.284 10:49:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:58.284 [2024-05-15 10:49:14.034094] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:58.284 request: 00:29:58.284 { 00:29:58.284 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.284 "secure_channel": false, 00:29:58.284 "listen_address": { 00:29:58.284 "trtype": "tcp", 00:29:58.284 
"traddr": "127.0.0.1", 00:29:58.284 "trsvcid": "4420" 00:29:58.284 }, 00:29:58.284 "method": "nvmf_subsystem_add_listener", 00:29:58.284 "req_id": 1 00:29:58.284 } 00:29:58.284 Got JSON-RPC error response 00:29:58.284 response: 00:29:58.284 { 00:29:58.284 "code": -32602, 00:29:58.284 "message": "Invalid parameters" 00:29:58.284 } 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:58.284 10:49:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=2896371 00:29:58.284 10:49:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2896371 /var/tmp/bperf.sock 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2896371 ']' 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:58.284 10:49:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:58.284 10:49:14 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:58.284 [2024-05-15 10:49:14.114250] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 
00:29:58.284 [2024-05-15 10:49:14.114367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896371 ] 00:29:58.541 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.541 [2024-05-15 10:49:14.231817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.541 [2024-05-15 10:49:14.323484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.106 10:49:14 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:59.106 10:49:14 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:59.106 10:49:14 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:29:59.106 10:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:29:59.106 10:49:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nLQq7dDuDz 00:29:59.106 10:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nLQq7dDuDz 00:29:59.363 10:49:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:59.363 10:49:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:59.363 10:49:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:59.363 10:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:59.363 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.363 10:49:15 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.5FlZlTlqaa == \/\t\m\p\/\t\m\p\.\5\F\l\Z\l\T\l\q\a\a ]] 00:29:59.621 10:49:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:59.621 10:49:15 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:59.621 10:49:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.nLQq7dDuDz == \/\t\m\p\/\t\m\p\.\n\L\Q\q\7\d\D\u\D\z ]] 00:29:59.621 10:49:15 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.621 10:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:59.880 10:49:15 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:59.880 10:49:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@10 -- 
# bperf_cmd keyring_get_keys 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:59.880 10:49:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:59.880 10:49:15 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:59.880 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:00.139 [2024-05-15 10:49:15.787607] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:00.139 nvme0n1 00:30:00.139 10:49:15 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:00.139 10:49:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.139 10:49:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:00.139 10:49:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.139 10:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.139 10:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:00.397 10:49:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:00.397 10:49:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:00.397 10:49:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:00.397 10:49:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.397 10:49:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.397 10:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.397 10:49:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:00.397 10:49:16 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:00.397 10:49:16 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.397 Running I/O for 1 seconds... 
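Note: the sequence above attaches an NVMe-oF/TCP controller to the local target with key0 as the TLS PSK and then starts the 1-second randrw job through bdevperf's perform_tests helper; the refcnt checks confirm key0 is now referenced twice (once by the keyring entry, once by the active controller) while key1 stays at 1. The same steps, condensed and with the repository prefix shortened:

    # Attach over loopback TCP using the registered key0 as the TLS PSK.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    # key0 is held by the keyring entry plus the controller, so refcnt == 2.
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'
    # Run the configured randrw workload (qd 128, 4k) against the new bdev.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests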
00:30:01.770 00:30:01.770 Latency(us) 00:30:01.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.770 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:01.770 nvme0n1 : 1.01 15234.77 59.51 0.00 0.00 8375.37 5035.92 17177.33 00:30:01.770 =================================================================================================================== 00:30:01.770 Total : 15234.77 59.51 0.00 0.00 8375.37 5035.92 17177.33 00:30:01.770 0 00:30:01.770 10:49:17 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:01.770 10:49:17 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:01.770 10:49:17 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:01.770 10:49:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.770 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.029 10:49:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:02.029 10:49:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:02.029 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:30:02.029 [2024-05-15 10:49:17.809296] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:02.029 [2024-05-15 10:49:17.810253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a8980 (107): Transport endpoint is not connected 00:30:02.029 [2024-05-15 10:49:17.811233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a8980 (9): Bad file descriptor 00:30:02.029 [2024-05-15 10:49:17.812230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:02.029 [2024-05-15 10:49:17.812246] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:02.029 [2024-05-15 10:49:17.812256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:02.029 request: 00:30:02.029 { 00:30:02.029 "name": "nvme0", 00:30:02.029 "trtype": "tcp", 00:30:02.029 "traddr": "127.0.0.1", 00:30:02.029 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.029 "adrfam": "ipv4", 00:30:02.029 "trsvcid": "4420", 00:30:02.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.029 "psk": "key1", 00:30:02.029 "method": "bdev_nvme_attach_controller", 00:30:02.029 "req_id": 1 00:30:02.029 } 00:30:02.029 Got JSON-RPC error response 00:30:02.029 response: 00:30:02.029 { 00:30:02.029 "code": -32602, 00:30:02.029 "message": "Invalid parameters" 00:30:02.029 } 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:02.029 10:49:17 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:02.030 10:49:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:02.030 10:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:02.030 10:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:02.030 10:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.030 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.030 10:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:02.290 10:49:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:02.290 10:49:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:02.290 10:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:02.290 10:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:02.290 10:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.290 10:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.290 10:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:02.290 10:49:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:02.290 10:49:18 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:02.290 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 
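Note: after the controller is detached, the trace intentionally retries the attach with key1, a PSK that does not match what the target side expects; the TLS handshake fails ('Transport endpoint is not connected') and the RPC returns code -32602, which the NOT wrapper counts as the expected outcome before the refcnt checks and the keyring_file_remove_key calls clean both keys up. A minimal stand-in for that expected-failure pattern (a simplified sketch, not the NOT helper from autotest_common.sh) could look like:

    # Succeed only when the wrapped command fails, mirroring the NOT-style check above.
    expect_failure() {
        if "$@"; then
            echo "unexpectedly succeeded: $*" >&2
            return 1
        fi
    }

    expect_failure scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1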
00:30:02.550 10:49:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:02.550 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:02.810 10:49:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:02.810 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.810 10:49:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:02.810 10:49:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:02.810 10:49:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5FlZlTlqaa 00:30:02.810 10:49:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.810 10:49:18 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:02.810 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:03.112 [2024-05-15 10:49:18.741221] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5FlZlTlqaa': 0100660 00:30:03.112 [2024-05-15 10:49:18.741262] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:03.112 request: 00:30:03.112 { 00:30:03.112 "name": "key0", 00:30:03.112 "path": "/tmp/tmp.5FlZlTlqaa", 00:30:03.112 "method": "keyring_file_add_key", 00:30:03.112 "req_id": 1 00:30:03.112 } 00:30:03.112 Got JSON-RPC error response 00:30:03.112 response: 00:30:03.112 { 00:30:03.112 "code": -1, 00:30:03.112 "message": "Operation not permitted" 00:30:03.112 } 00:30:03.112 10:49:18 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:30:03.112 10:49:18 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:03.112 10:49:18 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:03.112 10:49:18 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:03.112 10:49:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5FlZlTlqaa 00:30:03.112 10:49:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:03.112 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FlZlTlqaa 00:30:03.112 10:49:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5FlZlTlqaa 00:30:03.112 10:49:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:03.112 10:49:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:03.112 10:49:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.112 
10:49:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.112 10:49:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:03.112 10:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.370 10:49:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:03.370 10:49:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.370 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.370 [2024-05-15 10:49:19.165351] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5FlZlTlqaa': No such file or directory 00:30:03.370 [2024-05-15 10:49:19.165381] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:03.370 [2024-05-15 10:49:19.165404] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:03.370 [2024-05-15 10:49:19.165413] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:03.370 [2024-05-15 10:49:19.165422] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:03.370 request: 00:30:03.370 { 00:30:03.370 "name": "nvme0", 00:30:03.370 "trtype": "tcp", 00:30:03.370 "traddr": "127.0.0.1", 00:30:03.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.370 "adrfam": "ipv4", 00:30:03.370 "trsvcid": "4420", 00:30:03.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.370 "psk": "key0", 00:30:03.370 "method": "bdev_nvme_attach_controller", 00:30:03.370 "req_id": 1 00:30:03.370 } 00:30:03.370 Got JSON-RPC error response 00:30:03.370 response: 00:30:03.370 { 00:30:03.370 "code": -19, 00:30:03.370 "message": "No such device" 00:30:03.370 } 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:03.370 10:49:19 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:03.370 10:49:19 keyring_file -- keyring/file.sh@92 
-- # bperf_cmd keyring_file_remove_key key0 00:30:03.370 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:03.628 10:49:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jE18gjmTU4 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:03.629 10:49:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jE18gjmTU4 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jE18gjmTU4 00:30:03.629 10:49:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.jE18gjmTU4 00:30:03.629 10:49:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jE18gjmTU4 00:30:03.629 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jE18gjmTU4 00:30:03.887 10:49:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.887 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.887 nvme0n1 00:30:03.887 10:49:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:03.887 10:49:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:03.887 10:49:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:04.147 10:49:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:04.147 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.147 10:49:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:04.147 10:49:19 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:04.147 10:49:19 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:04.147 10:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:04.408 10:49:20 
keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:04.408 10:49:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:04.408 10:49:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:04.408 10:49:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.408 10:49:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:04.669 10:49:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:04.669 10:49:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:04.669 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:04.669 10:49:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:04.669 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.669 10:49:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:04.927 10:49:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:04.927 10:49:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jE18gjmTU4 00:30:04.927 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jE18gjmTU4 00:30:05.185 10:49:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nLQq7dDuDz 00:30:05.185 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nLQq7dDuDz 00:30:05.185 10:49:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:05.185 10:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:05.442 nvme0n1 00:30:05.442 10:49:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:05.442 10:49:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:05.700 10:49:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:05.700 "subsystems": [ 00:30:05.700 { 00:30:05.700 "subsystem": "keyring", 00:30:05.700 "config": [ 00:30:05.700 { 00:30:05.700 "method": "keyring_file_add_key", 
00:30:05.700 "params": { 00:30:05.700 "name": "key0", 00:30:05.700 "path": "/tmp/tmp.jE18gjmTU4" 00:30:05.700 } 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "method": "keyring_file_add_key", 00:30:05.700 "params": { 00:30:05.700 "name": "key1", 00:30:05.700 "path": "/tmp/tmp.nLQq7dDuDz" 00:30:05.700 } 00:30:05.700 } 00:30:05.700 ] 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "subsystem": "iobuf", 00:30:05.700 "config": [ 00:30:05.700 { 00:30:05.700 "method": "iobuf_set_options", 00:30:05.700 "params": { 00:30:05.700 "small_pool_count": 8192, 00:30:05.700 "large_pool_count": 1024, 00:30:05.700 "small_bufsize": 8192, 00:30:05.700 "large_bufsize": 135168 00:30:05.700 } 00:30:05.700 } 00:30:05.700 ] 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "subsystem": "sock", 00:30:05.700 "config": [ 00:30:05.700 { 00:30:05.700 "method": "sock_impl_set_options", 00:30:05.700 "params": { 00:30:05.700 "impl_name": "posix", 00:30:05.700 "recv_buf_size": 2097152, 00:30:05.700 "send_buf_size": 2097152, 00:30:05.700 "enable_recv_pipe": true, 00:30:05.700 "enable_quickack": false, 00:30:05.700 "enable_placement_id": 0, 00:30:05.700 "enable_zerocopy_send_server": true, 00:30:05.700 "enable_zerocopy_send_client": false, 00:30:05.700 "zerocopy_threshold": 0, 00:30:05.700 "tls_version": 0, 00:30:05.700 "enable_ktls": false 00:30:05.700 } 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "method": "sock_impl_set_options", 00:30:05.700 "params": { 00:30:05.700 "impl_name": "ssl", 00:30:05.700 "recv_buf_size": 4096, 00:30:05.700 "send_buf_size": 4096, 00:30:05.700 "enable_recv_pipe": true, 00:30:05.700 "enable_quickack": false, 00:30:05.700 "enable_placement_id": 0, 00:30:05.700 "enable_zerocopy_send_server": true, 00:30:05.700 "enable_zerocopy_send_client": false, 00:30:05.700 "zerocopy_threshold": 0, 00:30:05.700 "tls_version": 0, 00:30:05.700 "enable_ktls": false 00:30:05.700 } 00:30:05.700 } 00:30:05.700 ] 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "subsystem": "vmd", 00:30:05.700 "config": [] 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "subsystem": "accel", 00:30:05.700 "config": [ 00:30:05.700 { 00:30:05.700 "method": "accel_set_options", 00:30:05.700 "params": { 00:30:05.700 "small_cache_size": 128, 00:30:05.700 "large_cache_size": 16, 00:30:05.700 "task_count": 2048, 00:30:05.700 "sequence_count": 2048, 00:30:05.700 "buf_count": 2048 00:30:05.700 } 00:30:05.700 } 00:30:05.700 ] 00:30:05.700 }, 00:30:05.700 { 00:30:05.700 "subsystem": "bdev", 00:30:05.700 "config": [ 00:30:05.701 { 00:30:05.701 "method": "bdev_set_options", 00:30:05.701 "params": { 00:30:05.701 "bdev_io_pool_size": 65535, 00:30:05.701 "bdev_io_cache_size": 256, 00:30:05.701 "bdev_auto_examine": true, 00:30:05.701 "iobuf_small_cache_size": 128, 00:30:05.701 "iobuf_large_cache_size": 16 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_raid_set_options", 00:30:05.701 "params": { 00:30:05.701 "process_window_size_kb": 1024 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_iscsi_set_options", 00:30:05.701 "params": { 00:30:05.701 "timeout_sec": 30 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_nvme_set_options", 00:30:05.701 "params": { 00:30:05.701 "action_on_timeout": "none", 00:30:05.701 "timeout_us": 0, 00:30:05.701 "timeout_admin_us": 0, 00:30:05.701 "keep_alive_timeout_ms": 10000, 00:30:05.701 "arbitration_burst": 0, 00:30:05.701 "low_priority_weight": 0, 00:30:05.701 "medium_priority_weight": 0, 00:30:05.701 "high_priority_weight": 0, 00:30:05.701 
"nvme_adminq_poll_period_us": 10000, 00:30:05.701 "nvme_ioq_poll_period_us": 0, 00:30:05.701 "io_queue_requests": 512, 00:30:05.701 "delay_cmd_submit": true, 00:30:05.701 "transport_retry_count": 4, 00:30:05.701 "bdev_retry_count": 3, 00:30:05.701 "transport_ack_timeout": 0, 00:30:05.701 "ctrlr_loss_timeout_sec": 0, 00:30:05.701 "reconnect_delay_sec": 0, 00:30:05.701 "fast_io_fail_timeout_sec": 0, 00:30:05.701 "disable_auto_failback": false, 00:30:05.701 "generate_uuids": false, 00:30:05.701 "transport_tos": 0, 00:30:05.701 "nvme_error_stat": false, 00:30:05.701 "rdma_srq_size": 0, 00:30:05.701 "io_path_stat": false, 00:30:05.701 "allow_accel_sequence": false, 00:30:05.701 "rdma_max_cq_size": 0, 00:30:05.701 "rdma_cm_event_timeout_ms": 0, 00:30:05.701 "dhchap_digests": [ 00:30:05.701 "sha256", 00:30:05.701 "sha384", 00:30:05.701 "sha512" 00:30:05.701 ], 00:30:05.701 "dhchap_dhgroups": [ 00:30:05.701 "null", 00:30:05.701 "ffdhe2048", 00:30:05.701 "ffdhe3072", 00:30:05.701 "ffdhe4096", 00:30:05.701 "ffdhe6144", 00:30:05.701 "ffdhe8192" 00:30:05.701 ] 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_nvme_attach_controller", 00:30:05.701 "params": { 00:30:05.701 "name": "nvme0", 00:30:05.701 "trtype": "TCP", 00:30:05.701 "adrfam": "IPv4", 00:30:05.701 "traddr": "127.0.0.1", 00:30:05.701 "trsvcid": "4420", 00:30:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.701 "prchk_reftag": false, 00:30:05.701 "prchk_guard": false, 00:30:05.701 "ctrlr_loss_timeout_sec": 0, 00:30:05.701 "reconnect_delay_sec": 0, 00:30:05.701 "fast_io_fail_timeout_sec": 0, 00:30:05.701 "psk": "key0", 00:30:05.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.701 "hdgst": false, 00:30:05.701 "ddgst": false 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_nvme_set_hotplug", 00:30:05.701 "params": { 00:30:05.701 "period_us": 100000, 00:30:05.701 "enable": false 00:30:05.701 } 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "method": "bdev_wait_for_examine" 00:30:05.701 } 00:30:05.701 ] 00:30:05.701 }, 00:30:05.701 { 00:30:05.701 "subsystem": "nbd", 00:30:05.701 "config": [] 00:30:05.701 } 00:30:05.701 ] 00:30:05.701 }' 00:30:05.701 10:49:21 keyring_file -- keyring/file.sh@114 -- # killprocess 2896371 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2896371 ']' 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2896371 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@952 -- # uname 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2896371 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2896371' 00:30:05.701 killing process with pid 2896371 00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@966 -- # kill 2896371 00:30:05.701 Received shutdown signal, test time was about 1.000000 seconds 00:30:05.701 00:30:05.701 Latency(us) 00:30:05.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.701 =================================================================================================================== 00:30:05.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:30:05.701 10:49:21 keyring_file -- common/autotest_common.sh@971 -- # wait 2896371 00:30:05.959 10:49:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=2897998 00:30:05.959 10:49:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2897998 /var/tmp/bperf.sock 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2897998 ']' 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:05.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:05.959 10:49:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:05.959 10:49:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:05.959 10:49:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:05.959 "subsystems": [ 00:30:05.959 { 00:30:05.959 "subsystem": "keyring", 00:30:05.959 "config": [ 00:30:05.959 { 00:30:05.959 "method": "keyring_file_add_key", 00:30:05.959 "params": { 00:30:05.959 "name": "key0", 00:30:05.959 "path": "/tmp/tmp.jE18gjmTU4" 00:30:05.959 } 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "method": "keyring_file_add_key", 00:30:05.959 "params": { 00:30:05.959 "name": "key1", 00:30:05.959 "path": "/tmp/tmp.nLQq7dDuDz" 00:30:05.959 } 00:30:05.959 } 00:30:05.959 ] 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "subsystem": "iobuf", 00:30:05.959 "config": [ 00:30:05.959 { 00:30:05.959 "method": "iobuf_set_options", 00:30:05.959 "params": { 00:30:05.959 "small_pool_count": 8192, 00:30:05.959 "large_pool_count": 1024, 00:30:05.959 "small_bufsize": 8192, 00:30:05.959 "large_bufsize": 135168 00:30:05.959 } 00:30:05.959 } 00:30:05.959 ] 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "subsystem": "sock", 00:30:05.959 "config": [ 00:30:05.959 { 00:30:05.959 "method": "sock_impl_set_options", 00:30:05.959 "params": { 00:30:05.959 "impl_name": "posix", 00:30:05.959 "recv_buf_size": 2097152, 00:30:05.959 "send_buf_size": 2097152, 00:30:05.959 "enable_recv_pipe": true, 00:30:05.959 "enable_quickack": false, 00:30:05.959 "enable_placement_id": 0, 00:30:05.959 "enable_zerocopy_send_server": true, 00:30:05.959 "enable_zerocopy_send_client": false, 00:30:05.959 "zerocopy_threshold": 0, 00:30:05.959 "tls_version": 0, 00:30:05.959 "enable_ktls": false 00:30:05.959 } 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "method": "sock_impl_set_options", 00:30:05.959 "params": { 00:30:05.959 "impl_name": "ssl", 00:30:05.959 "recv_buf_size": 4096, 00:30:05.959 "send_buf_size": 4096, 00:30:05.959 "enable_recv_pipe": true, 00:30:05.959 "enable_quickack": false, 00:30:05.959 "enable_placement_id": 0, 00:30:05.959 "enable_zerocopy_send_server": true, 00:30:05.959 "enable_zerocopy_send_client": false, 00:30:05.959 "zerocopy_threshold": 0, 00:30:05.959 "tls_version": 0, 00:30:05.959 "enable_ktls": false 00:30:05.959 } 00:30:05.959 } 00:30:05.959 ] 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "subsystem": "vmd", 00:30:05.959 "config": [] 00:30:05.959 }, 00:30:05.959 { 00:30:05.959 "subsystem": "accel", 00:30:05.959 "config": [ 00:30:05.959 { 
00:30:05.959 "method": "accel_set_options", 00:30:05.959 "params": { 00:30:05.959 "small_cache_size": 128, 00:30:05.959 "large_cache_size": 16, 00:30:05.959 "task_count": 2048, 00:30:05.959 "sequence_count": 2048, 00:30:05.960 "buf_count": 2048 00:30:05.960 } 00:30:05.960 } 00:30:05.960 ] 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "subsystem": "bdev", 00:30:05.960 "config": [ 00:30:05.960 { 00:30:05.960 "method": "bdev_set_options", 00:30:05.960 "params": { 00:30:05.960 "bdev_io_pool_size": 65535, 00:30:05.960 "bdev_io_cache_size": 256, 00:30:05.960 "bdev_auto_examine": true, 00:30:05.960 "iobuf_small_cache_size": 128, 00:30:05.960 "iobuf_large_cache_size": 16 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_raid_set_options", 00:30:05.960 "params": { 00:30:05.960 "process_window_size_kb": 1024 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_iscsi_set_options", 00:30:05.960 "params": { 00:30:05.960 "timeout_sec": 30 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_nvme_set_options", 00:30:05.960 "params": { 00:30:05.960 "action_on_timeout": "none", 00:30:05.960 "timeout_us": 0, 00:30:05.960 "timeout_admin_us": 0, 00:30:05.960 "keep_alive_timeout_ms": 10000, 00:30:05.960 "arbitration_burst": 0, 00:30:05.960 "low_priority_weight": 0, 00:30:05.960 "medium_priority_weight": 0, 00:30:05.960 "high_priority_weight": 0, 00:30:05.960 "nvme_adminq_poll_period_us": 10000, 00:30:05.960 "nvme_ioq_poll_period_us": 0, 00:30:05.960 "io_queue_requests": 512, 00:30:05.960 "delay_cmd_submit": true, 00:30:05.960 "transport_retry_count": 4, 00:30:05.960 "bdev_retry_count": 3, 00:30:05.960 "transport_ack_timeout": 0, 00:30:05.960 "ctrlr_loss_timeout_sec": 0, 00:30:05.960 "reconnect_delay_sec": 0, 00:30:05.960 "fast_io_fail_timeout_sec": 0, 00:30:05.960 "disable_auto_failback": false, 00:30:05.960 "generate_uuids": false, 00:30:05.960 "transport_tos": 0, 00:30:05.960 "nvme_error_stat": false, 00:30:05.960 "rdma_srq_size": 0, 00:30:05.960 "io_path_stat": false, 00:30:05.960 "allow_accel_sequence": false, 00:30:05.960 "rdma_max_cq_size": 0, 00:30:05.960 "rdma_cm_event_timeout_ms": 0, 00:30:05.960 "dhchap_digests": [ 00:30:05.960 "sha256", 00:30:05.960 "sha384", 00:30:05.960 "sha512" 00:30:05.960 ], 00:30:05.960 "dhchap_dhgroups": [ 00:30:05.960 "null", 00:30:05.960 "ffdhe2048", 00:30:05.960 "ffdhe3072", 00:30:05.960 "ffdhe4096", 00:30:05.960 "ffdhe6144", 00:30:05.960 "ffdhe8192" 00:30:05.960 ] 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_nvme_attach_controller", 00:30:05.960 "params": { 00:30:05.960 "name": "nvme0", 00:30:05.960 "trtype": "TCP", 00:30:05.960 "adrfam": "IPv4", 00:30:05.960 "traddr": "127.0.0.1", 00:30:05.960 "trsvcid": "4420", 00:30:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.960 "prchk_reftag": false, 00:30:05.960 "prchk_guard": false, 00:30:05.960 "ctrlr_loss_timeout_sec": 0, 00:30:05.960 "reconnect_delay_sec": 0, 00:30:05.960 "fast_io_fail_timeout_sec": 0, 00:30:05.960 "psk": "key0", 00:30:05.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.960 "hdgst": false, 00:30:05.960 "ddgst": false 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_nvme_set_hotplug", 00:30:05.960 "params": { 00:30:05.960 "period_us": 100000, 00:30:05.960 "enable": false 00:30:05.960 } 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "method": "bdev_wait_for_examine" 00:30:05.960 } 00:30:05.960 ] 00:30:05.960 }, 00:30:05.960 { 00:30:05.960 "subsystem": "nbd", 00:30:05.960 "config": 
[] 00:30:05.960 } 00:30:05.960 ] 00:30:05.960 }' 00:30:05.960 [2024-05-15 10:49:21.807591] Starting SPDK v24.05-pre git sha1 0e4f7fc9b / DPDK 23.11.0 initialization... 00:30:05.960 [2024-05-15 10:49:21.807709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897998 ] 00:30:06.218 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.218 [2024-05-15 10:49:21.916587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.218 [2024-05-15 10:49:22.006039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.477 [2024-05-15 10:49:22.220559] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:06.739 10:49:22 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:06.739 10:49:22 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:30:06.739 10:49:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:06.739 10:49:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:06.739 10:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.999 10:49:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:06.999 10:49:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:06.999 10:49:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:06.999 10:49:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.999 10:49:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:07.257 10:49:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:07.257 10:49:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:07.257 10:49:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:07.257 10:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:07.257 10:49:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:07.257 10:49:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:07.257 10:49:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jE18gjmTU4 /tmp/tmp.nLQq7dDuDz 00:30:07.257 10:49:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2897998 00:30:07.257 10:49:23 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2897998 ']' 00:30:07.258 
10:49:23 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2897998 00:30:07.258 10:49:23 keyring_file -- common/autotest_common.sh@952 -- # uname 00:30:07.258 10:49:23 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:07.258 10:49:23 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2897998 00:30:07.514 10:49:23 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:07.514 10:49:23 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:07.514 10:49:23 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2897998' 00:30:07.514 killing process with pid 2897998 00:30:07.514 10:49:23 keyring_file -- common/autotest_common.sh@966 -- # kill 2897998 00:30:07.514 Received shutdown signal, test time was about 1.000000 seconds 00:30:07.514 00:30:07.514 Latency(us) 00:30:07.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.514 =================================================================================================================== 00:30:07.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:07.514 10:49:23 keyring_file -- common/autotest_common.sh@971 -- # wait 2897998 00:30:07.771 10:49:23 keyring_file -- keyring/file.sh@21 -- # killprocess 2896351 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2896351 ']' 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2896351 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@952 -- # uname 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2896351 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2896351' 00:30:07.771 killing process with pid 2896351 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@966 -- # kill 2896351 00:30:07.771 [2024-05-15 10:49:23.553773] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:07.771 [2024-05-15 10:49:23.553827] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:07.771 10:49:23 keyring_file -- common/autotest_common.sh@971 -- # wait 2896351 00:30:08.707 00:30:08.707 real 0m11.353s 00:30:08.707 user 0m25.193s 00:30:08.707 sys 0m2.602s 00:30:08.707 10:49:24 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:08.707 10:49:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:08.707 ************************************ 00:30:08.707 END TEST keyring_file 00:30:08.707 ************************************ 00:30:08.707 10:49:24 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:30:08.707 10:49:24 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 
00:30:08.707 10:49:24 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:08.707 10:49:24 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:30:08.707 10:49:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:08.707 10:49:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:08.707 10:49:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:08.707 10:49:24 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:30:08.707 10:49:24 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:30:08.707 10:49:24 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:08.707 10:49:24 -- common/autotest_common.sh@10 -- # set +x 00:30:08.707 10:49:24 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:30:08.707 10:49:24 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:30:08.707 10:49:24 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:30:08.707 10:49:24 -- common/autotest_common.sh@10 -- # set +x 00:30:13.981 INFO: APP EXITING 00:30:13.981 INFO: killing all VMs 00:30:13.981 INFO: killing vhost app 00:30:13.981 INFO: EXIT DONE 00:30:16.516 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:30:16.516 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:30:16.774 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:30:16.774 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:30:20.075 Cleaning 00:30:20.075 Removing: /var/run/dpdk/spdk0/config 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:20.075 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:20.075 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:20.075 Removing: /var/run/dpdk/spdk1/config 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:20.076 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:20.076 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:20.076 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:20.076 Removing: /var/run/dpdk/spdk2/config 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:20.076 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:20.076 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:20.076 Removing: /var/run/dpdk/spdk3/config 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:20.076 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:20.076 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:20.076 Removing: /var/run/dpdk/spdk4/config 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:20.076 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:20.076 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:20.076 Removing: /dev/shm/nvmf_trace.0 00:30:20.076 Removing: /dev/shm/spdk_tgt_trace.pid2465763 00:30:20.076 Removing: /var/run/dpdk/spdk0 00:30:20.076 Removing: /var/run/dpdk/spdk1 00:30:20.076 Removing: /var/run/dpdk/spdk2 00:30:20.076 Removing: /var/run/dpdk/spdk3 00:30:20.076 Removing: /var/run/dpdk/spdk4 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2463563 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2465763 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2466533 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2467744 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2468058 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2469291 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2469325 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2469779 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2471330 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2472326 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2472685 00:30:20.076 
Removing: /var/run/dpdk/spdk_pid2473064 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2473694 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2474054 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2474380 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2474695 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2475040 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2475750 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2479727 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2480066 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2480400 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2480632 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2481335 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2481600 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2482261 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2482550 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2482888 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2483013 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2483369 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2483525 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2484228 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2484539 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2484912 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2487080 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2488812 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2490637 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2492661 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2494520 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2496368 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2498389 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2500182 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2502262 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2504055 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2506042 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2507927 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2509722 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2511717 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2514147 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2515961 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2518016 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2519817 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2521813 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2523686 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2525490 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2527555 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2529353 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2531193 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2533650 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2537800 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2589233 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2594056 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2605697 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2611964 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2616570 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2617762 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2628717 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2629148 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2633951 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2640818 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2643558 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2655589 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2665894 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2668010 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2669010 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2689392 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2693899 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2719431 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2724932 00:30:20.076 
Removing: /var/run/dpdk/spdk_pid2726868 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2728966 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2729253 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2729556 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2729862 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2730614 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2732858 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2734027 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2734628 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2737164 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2738069 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2738989 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2743810 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2750374 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2755113 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2763627 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2763634 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2768729 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2769033 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2769325 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2769747 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2769863 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2774845 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2775567 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2781217 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2784453 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2790505 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2796618 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2806019 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2814367 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2814424 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2835572 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2837358 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2839383 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2841205 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2844498 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2845110 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2845999 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2846767 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2848131 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2848747 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2849634 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2850249 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2851723 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2859087 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2859102 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2864800 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2867331 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2869947 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2871411 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2874241 00:30:20.076 Removing: /var/run/dpdk/spdk_pid2876266 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2886254 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2886937 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2887643 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2890607 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2891217 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2891793 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2896351 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2896371 00:30:20.077 Removing: /var/run/dpdk/spdk_pid2897998 00:30:20.077 Clean 00:30:20.077 10:49:35 -- common/autotest_common.sh@1448 -- # return 0 00:30:20.077 10:49:35 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:30:20.077 10:49:35 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:20.077 10:49:35 -- common/autotest_common.sh@10 -- # set +x 00:30:20.077 10:49:35 -- 
spdk/autotest.sh@382 -- # timing_exit autotest 00:30:20.077 10:49:35 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:20.077 10:49:35 -- common/autotest_common.sh@10 -- # set +x 00:30:20.077 10:49:35 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:30:20.077 10:49:35 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:30:20.077 10:49:35 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:30:20.077 10:49:35 -- spdk/autotest.sh@387 -- # hash lcov 00:30:20.077 10:49:35 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:20.077 10:49:35 -- spdk/autotest.sh@389 -- # hostname 00:30:20.077 10:49:35 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-03 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:30:20.337 geninfo: WARNING: invalid characters removed from testname! 00:30:42.348 10:49:56 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:42.916 10:49:58 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:44.292 10:49:59 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:45.667 10:50:01 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:47.040 10:50:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:48.414 10:50:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:30:49.791 10:50:05 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:49.791 10:50:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:49.791 10:50:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:49.791 10:50:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.791 10:50:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.791 10:50:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.791 10:50:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.791 10:50:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.791 10:50:05 -- paths/export.sh@5 -- $ export PATH 00:30:49.791 10:50:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.791 10:50:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:30:49.791 10:50:05 -- common/autobuild_common.sh@437 -- $ date +%s 00:30:49.791 10:50:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715763005.XXXXXX 00:30:49.791 10:50:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715763005.wwRJ3b 00:30:49.791 10:50:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:30:49.791 10:50:05 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:30:49.791 10:50:05 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:30:49.791 10:50:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:49.791 10:50:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp 
--status-bugs' 00:30:49.791 10:50:05 -- common/autobuild_common.sh@453 -- $ get_config_params 00:30:49.791 10:50:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:49.791 10:50:05 -- common/autotest_common.sh@10 -- $ set +x 00:30:49.791 10:50:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:30:49.791 10:50:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:30:49.791 10:50:05 -- pm/common@17 -- $ local monitor 00:30:49.791 10:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:49.791 10:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:49.791 10:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:49.791 10:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:49.791 10:50:05 -- pm/common@25 -- $ sleep 1 00:30:49.791 10:50:05 -- pm/common@21 -- $ date +%s 00:30:49.791 10:50:05 -- pm/common@21 -- $ date +%s 00:30:49.791 10:50:05 -- pm/common@21 -- $ date +%s 00:30:49.791 10:50:05 -- pm/common@21 -- $ date +%s 00:30:49.791 10:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763005 00:30:49.792 10:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763005 00:30:49.792 10:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763005 00:30:49.792 10:50:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763005 00:30:49.792 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763005_collect-vmstat.pm.log 00:30:49.792 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763005_collect-cpu-temp.pm.log 00:30:49.792 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763005_collect-cpu-load.pm.log 00:30:49.792 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763005_collect-bmc-pm.bmc.pm.log 00:30:50.734 10:50:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:30:50.734 10:50:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:30:50.734 10:50:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:30:50.734 10:50:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:50.734 10:50:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:50.734 10:50:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:50.734 10:50:06 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:50.734 10:50:06 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:50.734 10:50:06 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname 
seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:30:50.734 10:50:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:50.734 10:50:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:50.734 10:50:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:50.734 10:50:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:50.734 10:50:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:50.734 10:50:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:50.734 10:50:06 -- pm/common@44 -- $ pid=2909031 00:30:50.734 10:50:06 -- pm/common@50 -- $ kill -TERM 2909031 00:30:50.734 10:50:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:50.734 10:50:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:50.734 10:50:06 -- pm/common@44 -- $ pid=2909032 00:30:50.734 10:50:06 -- pm/common@50 -- $ kill -TERM 2909032 00:30:50.734 10:50:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:50.734 10:50:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:50.734 10:50:06 -- pm/common@44 -- $ pid=2909034 00:30:50.734 10:50:06 -- pm/common@50 -- $ kill -TERM 2909034 00:30:50.734 10:50:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:50.734 10:50:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:50.734 10:50:06 -- pm/common@44 -- $ pid=2909062 00:30:50.734 10:50:06 -- pm/common@50 -- $ sudo -E kill -TERM 2909062 00:30:50.734 + [[ -n 2352981 ]] 00:30:50.734 + sudo kill 2352981 00:30:50.744 [Pipeline] } 00:30:50.764 [Pipeline] // stage 00:30:50.771 [Pipeline] } 00:30:50.792 [Pipeline] // timeout 00:30:50.798 [Pipeline] } 00:30:50.817 [Pipeline] // catchError 00:30:50.822 [Pipeline] } 00:30:50.839 [Pipeline] // wrap 00:30:50.846 [Pipeline] } 00:30:50.862 [Pipeline] // catchError 00:30:50.871 [Pipeline] stage 00:30:50.874 [Pipeline] { (Epilogue) 00:30:50.888 [Pipeline] catchError 00:30:50.890 [Pipeline] { 00:30:50.905 [Pipeline] echo 00:30:50.906 Cleanup processes 00:30:50.911 [Pipeline] sh 00:30:51.198 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:30:51.198 2909539 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:30:51.216 [Pipeline] sh 00:30:51.506 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:30:51.506 ++ grep -v 'sudo pgrep' 00:30:51.506 ++ awk '{print $1}' 00:30:51.506 + sudo kill -9 00:30:51.506 + true 00:30:51.517 [Pipeline] sh 00:30:51.801 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:01.810 [Pipeline] sh 00:31:02.096 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:02.096 Artifacts sizes are good 00:31:02.110 [Pipeline] archiveArtifacts 00:31:02.117 Archiving artifacts 00:31:02.299 [Pipeline] sh 00:31:02.591 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:31:02.607 [Pipeline] cleanWs 00:31:02.618 [WS-CLEANUP] Deleting project workspace... 00:31:02.618 [WS-CLEANUP] Deferred wipeout is used... 
00:31:02.625 [WS-CLEANUP] done 00:31:02.627 [Pipeline] } 00:31:02.652 [Pipeline] // catchError 00:31:02.666 [Pipeline] sh 00:31:02.951 + logger -p user.info -t JENKINS-CI 00:31:02.958 [Pipeline] } 00:31:02.972 [Pipeline] // stage 00:31:02.977 [Pipeline] } 00:31:02.994 [Pipeline] // node 00:31:02.999 [Pipeline] End of Pipeline 00:31:03.031 Finished: SUCCESS
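
For reference, the coverage post-processing traced near the end of this run (capture the test-run data, merge it with the pre-test baseline, then strip third-party and system sources) follows a standard lcov sequence. The snippet below is a minimal stand-alone sketch of that pattern, not the exact autotest.sh code; the WORKSPACE and OUT variables, the hostname-based test tag, and the filter globs are illustrative assumptions taken from the paths seen in this log.

    #!/usr/bin/env bash
    # Sketch of the lcov capture/merge/filter sequence seen in this log.
    # WORKSPACE and OUT are illustrative placeholders, not autotest.sh variables.
    set -euo pipefail

    WORKSPACE=/var/jenkins/workspace/dsa-phy-autotest   # assumed checkout root
    OUT="$WORKSPACE/output"                             # assumed results directory
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

    # Capture the coverage gathered while the tests ran, tagged with the host name.
    lcov "${LCOV_OPTS[@]}" -c -d "$WORKSPACE/spdk" -t "$(hostname -s)" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test-run capture.
    lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Remove third-party and system code from the combined report, one pattern at a time.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done

    # Drop the intermediate captures once the filtered total exists.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

The resulting cov_total.info is what a later genhtml step (or an archiving stage like the one above) would consume; filtering in place with repeated -r calls keeps only SPDK's own sources in the report.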